Split Operations
When You Need This
The high-level upload() handles single-piece multi-copy uploads end-to-end. Use split operations when you need:
- Batch uploading many files to specific providers without repeated context creation
- Custom error handling at each phase — retry store failures, skip failed secondaries, recover from commit failures
- Signing control — batch or pre-sign operations to avoid multiple wallet signature prompts during multi-copy uploads
- Precise provider/dataset targeting when uploading to known providers
| | upload() | Split Operations |
|---|---|---|
| Control | Automatic | Manual per-phase |
| Error recovery | Re-upload on commit failure | Retry commit without re-upload |
| Batch files | One call per file | Store many, commit in batch |
| Wallet prompts | Managed internally | Control via presignForCommit() |
| Best for | Most use cases | Production pipelines, custom UX |
The Upload Pipeline
Every upload goes through three phases:
```
store ──► pull ──► commit
  │         │         │
  │         │         └─ On-chain: create dataset, add piece, start payments
  │         └─ SP-to-SP: secondary provider fetches from primary
  └─ Upload: bytes sent to one provider (no on-chain state yet)
```

- store: Upload bytes to a single SP. Returns `{ pieceCid, size }`. The piece is "parked" on the SP but not yet on-chain, and is subject to garbage collection if not committed.
- pull: SP-to-SP transfer. The destination SP fetches the piece from a source SP. No client bandwidth is used.
- commit: Submit an on-chain transaction to add the piece to a data set. Creates the data set and payment rail if needed.
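To make the hand-off between the three phases concrete, here is a runnable sketch using fake in-memory contexts. Everything here is a hypothetical stand-in for illustration only: `makeFakeContext` and its method bodies are not SDK code, and real contexts come from `synapse.storage.createContexts()`.

```typescript
// Hypothetical in-memory stand-in for a StorageContext; method names mirror
// the SDK's store/pull/commit, but the bodies are fakes for illustration.
type StoreResult = { pieceCid: string; size: number }
type PieceSource = { getParked(cid: string): Uint8Array | undefined }

function makeFakeContext() {
  const parked = new Map<string, Uint8Array>() // stored but not yet committed

  return {
    // store: bytes land on one provider; nothing is on-chain yet
    async store(data: Uint8Array): Promise<StoreResult> {
      const pieceCid = `fake-piece-${data.length}` // the real SDK derives a PieceCID hash
      parked.set(pieceCid, data)
      return { pieceCid, size: data.length }
    },
    // pull: this provider fetches the parked piece from a source provider
    async pull(pieceCid: string, from: PieceSource) {
      const bytes = from.getParked(pieceCid)
      if (bytes === undefined) return { status: "failed" as const }
      parked.set(pieceCid, bytes)
      return { status: "complete" as const }
    },
    // commit: pretend to submit the on-chain add-piece transaction
    async commit(pieceCid: string) {
      if (!parked.has(pieceCid)) throw new Error("piece not parked on this SP")
      return { dataSetId: 1n, isNewDataSet: true }
    },
    getParked: (cid: string) => parked.get(cid),
  }
}

const primary = makeFakeContext()
const secondary = makeFakeContext()

const data = new TextEncoder().encode("hello split ops")
const { pieceCid, size } = await primary.store(data)        // phase 1: store
const pullResult = await secondary.pull(pieceCid, primary)  // phase 2: pull
const committed = await secondary.commit(pieceCid)          // phase 3: commit
console.log(size, pullResult.status, committed.isNewDataSet)
```

Note how commit only succeeds on a provider that actually holds the piece: the pull phase is what makes the secondary commit possible.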
Example Upload Flow
Store Phase
Upload data to a provider without committing on-chain:
```typescript
const contexts = await synapse.storage.createContexts({
  copies: 2,
})
const [primary, secondary] = contexts

const { pieceCid, size } = await primary.store(data, {
  pieceCid: preCalculatedCid, // skip expensive PieceCID (hash digest) calculation (optional)
  signal: abortController.signal, // cancellation (optional)
  onProgress: (bytes) => { // progress callback (optional)
    console.log(`Uploaded ${bytes} bytes`)
  },
})

console.log(`Stored: ${pieceCid}, ${size} bytes`)
```

store() accepts `Uint8Array` or `ReadableStream<Uint8Array>`. Use streaming for large files to minimize memory.
After store completes, the piece is parked on the SP and can be:
- Retrieved via the context's `getPieceUrl(pieceCid)`
- Pulled to other providers via `pull()`
- Committed on-chain via `commit()`
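Because store() accepts a `ReadableStream<Uint8Array>`, large payloads can be streamed instead of buffered whole. The chunking helper below is a hypothetical illustration, not an SDK API; in practice you would usually hand store() a file or network stream (e.g. a fetch `response.body`) directly.

```typescript
// Hypothetical helper: split an in-memory buffer into a ReadableStream of
// fixed-size chunks (Web Streams API, available in browsers and Node 18+).
function toChunkedStream(
  data: Uint8Array,
  chunkSize = 64 * 1024,
): ReadableStream<Uint8Array> {
  let offset = 0
  return new ReadableStream({
    pull(controller) {
      if (offset >= data.length) {
        controller.close()
        return
      }
      controller.enqueue(data.subarray(offset, offset + chunkSize))
      offset += chunkSize
    },
  })
}

// Round-trip the stream just to show the chunking behavior.
const stream = toChunkedStream(new Uint8Array(200_000), 64 * 1024)
const reader = stream.getReader()
let total = 0
let chunks = 0
for (;;) {
  const { done, value } = await reader.read()
  if (done || value === undefined) break
  total += value.length
  chunks++
}
console.log(`${chunks} chunks, ${total} bytes`)
```

With a real context, `await primary.store(toChunkedStream(bigBuffer))` would then upload without holding a second copy of the data in memory.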
Pull Phase (SP-to-SP Transfer)
Request a secondary provider to fetch pieces from the primary:
```typescript
// Pre-sign to avoid double wallet prompts during pull + commit
const extraData = await secondary.presignForCommit([{ pieceCid }])

const pullResult = await secondary.pull({
  pieces: [pieceCid],
  from: (cid) => primary.getPieceUrl(cid), // source URL builder (or URL string)
  extraData, // pre-signed auth (optional, reused for commit)
  signal: abortController.signal, // cancellation (optional)
  onProgress: (cid, status) => { // status callback (optional)
    console.log(`${cid}: ${status}`)
  },
})

if (pullResult.status !== "complete") {
  for (const piece of pullResult.pieces) {
    if (piece.status === "failed") {
      console.error(`Failed to pull ${piece.pieceCid}`)
    }
  }
}
```

The `from` parameter accepts either a URL string (base service URL) or a function that returns a piece URL for a given PieceCID.
Pre-signing: presignForCommit() creates an EIP-712 signature that can be reused for both pull() and commit(). This avoids prompting the wallet twice. Pass the same extraData to both calls.
Commit Phase
Add pieces to an on-chain data set. Creates the data set and payment rail if one doesn't exist:
```typescript
// Commit on both providers
const [primaryCommit, secondaryCommit] = await Promise.allSettled([
  primary.commit({
    pieces: [{ pieceCid, pieceMetadata: { filename: "doc.pdf" } }],
    onSubmitted: (txHash) => {
      console.log(`Transaction submitted: ${txHash}`)
    },
  }),
  secondary.commit({
    pieces: [{ pieceCid, pieceMetadata: { filename: "doc.pdf" } }],
    extraData, // pre-signed auth from presignForCommit() (optional)
    onSubmitted: (txHash) => {
      console.log(`Transaction submitted: ${txHash}`)
    },
  }),
])

if (primaryCommit.status === "fulfilled") {
  console.log(`Primary: dataSet=${primaryCommit.value.dataSetId}`)
}
if (secondaryCommit.status === "fulfilled") {
  console.log(`Secondary: dataSet=${secondaryCommit.value.dataSetId}`)
}
```

The result:
- `txHash` — transaction hash
- `pieceIds` — assigned piece IDs (one per input piece)
- `dataSetId` — data set ID (may be newly created)
- `isNewDataSet` — whether a new data set was created
Multi-File Batch Example
Upload multiple files to 2 providers with full error handling:
```typescript
import { Synapse, type PieceCID } from "@filoz/synapse-sdk"
import { privateKeyToAccount } from "viem/accounts"

const synapse = Synapse.create({
  account: privateKeyToAccount("0x.."),
  source: "my-app",
})

const files = [
  new TextEncoder().encode("File 1 content..."),
  new TextEncoder().encode("File 2 content..."),
  new TextEncoder().encode("File 3 content..."),
]

// Create contexts for 2 providers
const [primary, secondary] = await synapse.storage.createContexts({
  copies: 2,
  metadata: { source: "batch-upload" },
})

// Store all files on primary (note: these could be done in parallel w/ Promise.all)
const stored: { pieceCid: PieceCID; size: number }[] = []
for (const file of files) {
  const result = await primary.store(file)
  stored.push(result)
  console.log(`Stored ${result.pieceCid}`)
}

// Pre-sign for all pieces on secondary
const pieceCids = stored.map((s) => s.pieceCid)
const extraData = await secondary.presignForCommit(
  pieceCids.map((cid) => ({ pieceCid: cid })),
)

// Pull all pieces to secondary
const pullResult = await secondary.pull({
  pieces: pieceCids,
  from: (cid) => primary.getPieceUrl(cid),
  extraData,
})

// Commit on both providers
const [primaryCommit, secondaryCommit] = await Promise.allSettled([
  primary.commit({ pieces: pieceCids.map((cid) => ({ pieceCid: cid })) }),
  pullResult.status === "complete"
    ? secondary.commit({
        pieces: pieceCids.map((cid) => ({ pieceCid: cid })),
        extraData,
      })
    : Promise.reject(new Error("Pull failed, skipping secondary commit")), // not advised!
])

if (primaryCommit.status === "fulfilled") {
  console.log(`Primary: dataSet=${primaryCommit.value.dataSetId}`)
}
if (secondaryCommit.status === "fulfilled") {
  console.log(`Secondary: dataSet=${secondaryCommit.value.dataSetId}`)
}
```

Error Handling Patterns
Each phase's errors are independent. Failures don't cascade — you can retry at any level:
| Phase | Failure | Data state | Recovery |
|---|---|---|---|
| store | Upload/network error | No data on SP | Retry store() with same or different context |
| pull | SP-to-SP transfer failed | Data on primary only | Retry pull(), try different secondary, or skip |
| commit | On-chain transaction failed | Data on SP but not on-chain | Retry commit() (no re-upload needed) |
The key advantage of split operations: if commit fails, data is already stored on the SP. You can retry commit() without re-uploading the data. With the high-level upload(), a CommitError would require re-uploading.
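Since a failed commit leaves the piece parked on the SP, wrapping commit() alone in a retry loop is enough; no bytes are re-sent. A generic retry-with-backoff helper could look like this (a sketch, not an SDK API):

```typescript
// Hypothetical retry helper: re-run an async operation with exponential backoff.
async function retry<T>(
  op: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 1_000,
): Promise<T> {
  let lastError: unknown
  for (let i = 0; i < attempts; i++) {
    try {
      return await op()
    } catch (err) {
      lastError = err
      if (i < attempts - 1) {
        // exponential backoff: 1s, 2s, 4s, ... (with the default base delay)
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i))
      }
    }
  }
  throw lastError
}
```

Usage: `await retry(() => primary.commit({ pieces }))`; the stored data is untouched between attempts, so only the on-chain transaction is repeated.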
For maximum control, use the core library functions without the SDK wrapper classes. This is useful for building custom upload pipelines, integrating into existing frameworks, or server-side applications that don’t need the SDK’s orchestration.
Next Steps
- Storage Operations — The high-level multi-copy upload API for most use cases. Start here if you haven't used `synapse.storage.upload()` yet.
- Calculate Storage Costs — Plan your budget and fund your storage account. Use the quick calculator to estimate monthly costs.
- Payments & Storage — Deep dive into the payment model, deposit calculation, and rate mechanics. Recommended for understanding how split operations interact with the payment system.
- Payment Management — Manage deposits, approvals, and payment rails. Required before your first upload.