Introduction
This document proposes the Iterable Streams API, a streaming interface designed around the following principles:
- Streams are iterables: An async stream is an AsyncIterable<Uint8Array[]>; a sync stream is an Iterable<Uint8Array[]>. There is no custom stream class. Consumption uses for await...of and for...of.
- Batched chunks: Iterables yield Uint8Array[] (arrays of chunks) rather than individual chunks, amortizing the cost of each async iteration tick.
- Bytes only: All streams carry Uint8Array data exclusively. Strings are automatically UTF-8 encoded.
- Pull-through transforms: Transform pipelines are lazy. No data flows until the consumer pulls.
- Explicit backpressure: Backpressure is controlled by a configurable backpressure policy ("strict", "block", "drop-oldest", "drop-newest").
- Clean sync/async separation: Synchronous and asynchronous APIs are fully separated.
- Explicit multi-consumer: Sharing a stream among multiple consumers requires explicit opt-in via broadcast channel (push model) or shared source (pull model).
This specification is intended for use by web-interoperable runtimes as defined by WinterTC, though any ECMAScript-based runtime may implement it.
Copyright
© 2026 Ecma International
Permission under Ecma’s copyright to copy, modify, prepare derivative works of, and distribute this work, with or without modification, for any purpose and without fee or royalty is hereby granted, provided that you include the full text of this copyright notice on ALL copies of the work or portions thereof.
THIS WORK IS PROVIDED "AS IS," AND COPYRIGHT HOLDERS MAKE NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO, WARRANTIES OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF THE DOCUMENT WILL NOT INFRINGE ANY THIRD PARTY PATENTS, COPYRIGHTS, TRADEMARKS OR OTHER RIGHTS.
1. Scope
This proposal defines the Iterable Streams API, a bytes-only streaming interface built on the ECMAScript iteration protocols. It specifies:
- A Stream namespace providing factory functions for creating, transforming, consuming, and composing byte streams.
- A Writer interface for push-based data production with explicit backpressure.
- Transform pipeline semantics based on pull-through lazy evaluation.
- Multi-consumer patterns for sharing a single source across multiple consumers.
- Bidirectional duplex channels for full-duplex communication.
- Protocol symbols enabling user-defined objects to participate in streaming.
2. Conformance
A conforming implementation of this specification shall provide the namespace, interfaces, and functions listed herein. A conforming implementation shall also conform to [ECMASCRIPT] and [WEBIDL].
Conforming implementations must support AbortSignal and AbortController as defined in [DOM].
Note: AbortSignal and AbortController are the only normative dependency this specification has on the DOM Standard. They are required because ECMAScript does not yet define a language-level cancellation protocol. Should TC39 adopt such a protocol in the future, a subsequent revision of this specification may remove the DOM dependency in favor of the language-native mechanism.
3. Normative references
The following documents are referred to in the text in such a way that some or all of their content constitutes requirements of this document.
Normative References
- [DOM]
- Anne van Kesteren. DOM Standard. Living Standard. URL: https://dom.spec.whatwg.org/
- [ECMASCRIPT]
- ECMAScript Language Specification. URL: https://tc39.es/ecma262/multipage/
- [ENCODING]
- Anne van Kesteren. Encoding Standard. Living Standard. URL: https://encoding.spec.whatwg.org/
- [WEBIDL]
- Edgar Chen; Timothy Gu. Web IDL Standard. Living Standard. URL: https://webidl.spec.whatwg.org/
Informative References
- [STREAMS]
- Adam Rice; et al. Streams Standard. Living Standard. URL: https://streams.spec.whatwg.org/
4. Terms and definitions
For the purposes of this document, the terms and definitions given in [ECMASCRIPT], the DOM Standard [DOM], the Encoding Standard [ENCODING], and the following apply.
4.1. web-interoperable runtime
ECMAScript-based runtime environment as defined by WinterTC
4.2. batched chunks
an array of Uint8Array objects (Uint8Array[]) representing a group of byte chunks yielded as a single iteration step to amortize the cost of async iteration
4.3. backpressure policy
a BackpressurePolicy value controlling the behavior when an internal buffer is full
4.4. push stream
a stream created by push() consisting of a bonded Writer and async iterable pair, where data written to the writer flows to the iterable
4.5. pull pipeline
a lazy transform chain created by pull() or pullSync() that only processes data when the consumer iterates
4.6. broadcast channel
a push-model multi-consumer pattern where data written to a single Writer is delivered to all subscribed consumers
4.7. shared source
a pull-model multi-consumer pattern where a single source is consumed on-demand as multiple consumers pull data, with a shared buffer for slower consumers to catch up
5. Core concepts
5.1. Streams as iterables
In this API, a stream is not a custom class. It is a standard ECMAScript iterable. An async byte stream conforms to the AsyncIterable<Uint8Array[]> protocol, and a sync byte stream conforms to the Iterable<Uint8Array[]> protocol. Consumers iterate streams using for await...of or for...of.
5.2. Batched chunks
Every iteration step yields a Uint8Array[] (an array of one or more chunks) rather than a single Uint8Array. This batching amortizes the per-tick cost of async iteration across multiple chunks.
Consumers should iterate the inner array:
for await (const chunks of readable) {
  for (const chunk of chunks) {
    // Process individual Uint8Array chunk
  }
}
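Note: The following non-normative sketch uses plain JavaScript (no API from this specification; batchedSource and countTicksAndChunks are illustrative names) to make the amortization concrete: five chunks arrive in only two async iteration steps.

```javascript
// Hypothetical source: an async generator yielding Uint8Array[] batches.
async function* batchedSource() {
  yield [new Uint8Array([1]), new Uint8Array([2, 3])]; // batch of 2 chunks
  yield [new Uint8Array([4]), new Uint8Array([5]), new Uint8Array([6])]; // batch of 3 chunks
}

async function countTicksAndChunks(readable) {
  let ticks = 0, chunks = 0, bytes = 0;
  for await (const batch of readable) { // one await per batch...
    ticks++;
    for (const chunk of batch) {        // ...but several chunks per batch
      chunks++;
      bytes += chunk.byteLength;
    }
  }
  return { ticks, chunks, bytes };
}
```

For the source above, countTicksAndChunks resolves to two iteration ticks covering five chunks; each for await step costs at least one microtask turn, and batching spreads that cost over every chunk in the batch.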
5.3. Backpressure
Backpressure is the mechanism by which a consumer signals to a producer that it should slow down. In this API, backpressure is managed through backpressure policies:
- "strict" (default): Catches producers that ignore backpressure. Properly awaited writes wait for buffer space; un-awaited writes exceeding the buffer limit cause a rejection. Both the slots buffer and the pending writes queue are limited by highWaterMark.
- "block": Async writes wait for buffer space. Sync writes return false when the buffer is full. The pending writes queue is unbounded.
- "drop-oldest": The oldest buffered data is discarded to make room for new data. Writes never block.
- "drop-newest": Incoming data is silently discarded when the buffer is full. Writes never block.
The highWaterMark option controls the buffer size in slots (chunk batches), not bytes. It is clamped to a minimum of 1. Implementations may also apply a reasonable upper limit; the specific maximum is implementation-defined, but must be at least 1024.
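Note: The following non-normative sketch models the four policies and the highWaterMark clamp with a bounded slot buffer. SlotBuffer and its methods are hypothetical and not part of this API; real writers also queue pending async writes.

```javascript
// Illustrative only: a bounded slot buffer applying the four backpressure
// policies to synchronous pushes. Returns true if the caller may continue
// at full speed, false as a backpressure/failure signal.
class SlotBuffer {
  constructor(highWaterMark, policy, max = 1024) {
    // highWaterMark is clamped to [1, implementation-defined maximum >= 1024]
    this.capacity = Math.min(Math.max(1, highWaterMark), max);
    this.policy = policy;
    this.slots = [];
  }
  push(batch) {
    if (this.slots.length < this.capacity) { this.slots.push(batch); return true; }
    switch (this.policy) {
      case "drop-oldest": this.slots.shift(); this.slots.push(batch); return true;
      case "drop-newest": return true;            // batch silently discarded
      case "strict":      return false;           // rejected: fall back / await
      case "block":       this.slots.push(batch); // accepted anyway...
                          return false;           // ...but caller should slow down
    }
  }
}
```

Under "strict" the false return means the data was not accepted and the caller should fall back to an awaited write; under "block" the data is accepted and false is only advisory.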
5.4. Pull and push models
The pull model is fundamental; the push model builds on it by bonding a Writer to a pull-based iterable.
- Pull model: Data flows only when the consumer iterates, pulling through any transform chain back to the source. Created via pull(), from(), and share().
- Push model: A Writer bonded to a pull-based iterable, allowing a producer to write data that consumers pull via iteration. Created via push() and broadcast().
Transforms are strictly lazy in both models: no transform function executes until the consumer iterates.
5.5. Try-fallback pattern
The Writer interface provides synchronous variants of its write and end methods (writeSync(), writevSync(), endSync()) alongside the async versions. The intended usage is a try-fallback pattern: attempt the synchronous method first, and if it indicates failure, fall back to the async version. (fail() is unconditionally synchronous and does not participate in this pattern.)
For writeSync() and writevSync(), a return value of false indicates the synchronous attempt was not accepted. For endSync(), a return value of −1 indicates the same. In either case, the caller should fall back to the async method and await it:
// Try sync, fall back to async
if (!writer.writeSync(chunk)) {
  await writer.write(chunk);
}

// End with try-fallback
const n = writer.endSync();
if (n < 0) {
  await writer.end();
}
This pattern enables high-performance paths where synchronous writes succeed (e.g., buffer has space), while still handling backpressure correctly when they do not. The Writer synchronous methods never throw on backpressure; they return a failure indicator, leaving the caller to decide whether to await, retry, or drop. (SyncWriter has different semantics; see § 6.5 The SyncWriter interface.)
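Note: The following non-normative sketch shows a minimal writer honoring this contract. makeMiniWriter, drain, and send are hypothetical helpers; a real Writer also handles the closing and errored states and the full set of backpressure policies.

```javascript
// Hypothetical single-buffer writer: writeSync() succeeds while the buffer
// has space; write() awaits space. Illustrative only.
function makeMiniWriter(highWaterMark = 1) {
  const buffer = [];
  let waiter = null;
  return {
    writeSync(chunk) {
      if (buffer.length >= highWaterMark) return false; // backpressure signal
      buffer.push(chunk);
      return true;
    },
    async write(chunk) {
      while (buffer.length >= highWaterMark) {
        await new Promise(resolve => { waiter = resolve; }); // wait for drain
      }
      buffer.push(chunk);
    },
    drain() { // consumer side: take one item and wake a waiting producer
      const chunk = buffer.shift();
      if (waiter) { const w = waiter; waiter = null; w(); }
      return chunk;
    },
  };
}

// The try-fallback pattern from above:
async function send(writer, chunk) {
  if (!writer.writeSync(chunk)) await writer.write(chunk);
}
```

When the buffer has space, send() completes fully synchronously; only under backpressure does it pay the cost of an awaited write.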
6. Web IDL definitions
6.1. BackpressurePolicy
enum BackpressurePolicy {
  "strict",
  "block",
  "drop-oldest",
  "drop-newest"
};
The BackpressurePolicy enum defines the backpressure policy options described in § 5.3 Backpressure.
6.2. Stream type aliases
typedef object ByteReadableStream;
typedef object SyncByteReadableStream;
A ByteReadableStream is an object conforming to the AsyncIterable<Uint8Array[]> protocol: it has a Symbol.asyncIterator method that returns an async iterator yielding batched chunks.
A SyncByteReadableStream is an object conforming to the Iterable<Uint8Array[]> protocol: it has a Symbol.iterator method that returns an iterator yielding batched chunks.
These are structural types defined by the ECMAScript iteration protocols, not interfaces with a prototype chain. The typedef object declaration is used because Web IDL cannot directly express parameterized iterable types as values. See the editorial note on any types for further discussion.
6.3. Options dictionaries
dictionary WriteOptions {
  AbortSignal signal;
};

dictionary PushStreamOptions {
  unsigned long highWaterMark = 4;
  BackpressurePolicy backpressure = "strict";
  AbortSignal signal;
};

dictionary PullOptions {
  AbortSignal signal;
};

dictionary PipeToOptions {
  AbortSignal signal;
  boolean preventClose = false;
  boolean preventFail = false;
};

dictionary PipeToSyncOptions {
  boolean preventClose = false;
  boolean preventFail = false;
};

dictionary ConsumeOptions {
  AbortSignal signal;
  [EnforceRange] unsigned long long limit;
};

dictionary ConsumeSyncOptions {
  [EnforceRange] unsigned long long limit;
};

dictionary TextConsumeOptions : ConsumeOptions {
  DOMString encoding = "utf-8";
};

dictionary TextConsumeSyncOptions : ConsumeSyncOptions {
  DOMString encoding = "utf-8";
};
This specification defines "utf-8" as the only normative value for the encoding members of TextConsumeOptions and TextConsumeSyncOptions. Implementations may support additional encodings as defined by [ENCODING], but only "utf-8" is required for conformance. If an unsupported encoding is provided, the implementation must throw a RangeError.
dictionary MergeOptions {
  AbortSignal signal;
};

dictionary BroadcastOptions {
  unsigned long highWaterMark = 16;
  BackpressurePolicy backpressure = "strict";
  AbortSignal signal;
};

dictionary ShareOptions {
  unsigned long highWaterMark = 16;
  BackpressurePolicy backpressure = "strict";
  AbortSignal signal;
};

dictionary ShareSyncOptions {
  unsigned long highWaterMark = 16;
  BackpressurePolicy backpressure = "strict";
};
Note: The default highWaterMark for BroadcastOptions, ShareOptions, and ShareSyncOptions is 16, compared to 4 for PushStreamOptions and DuplexOptions. Multi-consumer patterns benefit from a larger buffer because the slowest consumer governs backpressure, so a larger buffer provides more headroom before fast consumers are stalled by a slow one.
dictionary DuplexDirectionOptions {
  unsigned long highWaterMark;
  BackpressurePolicy backpressure;
};

dictionary DuplexOptions {
  unsigned long highWaterMark = 4;
  BackpressurePolicy backpressure = "strict";
  DuplexDirectionOptions a;
  DuplexDirectionOptions b;
  AbortSignal signal;
};

dictionary TransformCallbackOptions {
  required AbortSignal signal;
};

callback StatelessTransformFn = any (sequence<Uint8Array>? chunks, TransformCallbackOptions options);

callback SyncStatelessTransformFn = any (sequence<Uint8Array>? chunks);

dictionary PushStreamResult {
  required Writer writer;
  required ByteReadableStream readable;
};

dictionary BroadcastResult {
  required Writer writer;
  required Broadcast broadcast;
};
6.4. The Writer interface
interface Writer {
  readonly attribute long? desiredSize;
  Promise<undefined> write((Uint8Array or USVString) chunk, optional WriteOptions options = {});
  Promise<undefined> writev(sequence<(Uint8Array or USVString)> chunks, optional WriteOptions options = {});
  boolean writeSync((Uint8Array or USVString) chunk);
  boolean writevSync(sequence<(Uint8Array or USVString)> chunks);
  Promise<unsigned long long> end(optional WriteOptions options = {});
  long long endSync();
  undefined fail(optional any reason);
};
The Writer interface is the API for producing data. See § 7.2 The Writer interface for full semantics.
Note: Writer is an interface, not a concrete class. Any object implementing this interface can serve as a writer. Implementations should support both Symbol.asyncDispose and Symbol.dispose (calling fail() with no argument), enabling both await using and using syntax.
6.5. The SyncWriter interface
interface SyncWriter {
  readonly attribute long? desiredSize;
  boolean writeSync((Uint8Array or USVString) chunk);
  boolean writevSync(sequence<(Uint8Array or USVString)> chunks);
  unsigned long long endSync();
  undefined fail(optional any reason);
};
The SyncWriter interface is a synchronous interface for producing data. Writer is a superset of SyncWriter: any object implementing Writer that also provides the Sync-suffixed methods satisfies SyncWriter. This means a single object can serve as both a Writer (for pipeTo()) and a SyncWriter (for pipeToSync()).
Like Writer, SyncWriter is an interface, not a concrete class. Implementations should support Symbol.dispose (calling fail() with no argument), enabling using syntax.
SyncWriter follows the same backpressure policies as Writer with adaptations for synchronous operation:
- With "block", writeSync() and writevSync() always enqueue the chunk(s) and return true when the buffer has space, or enqueue and return false when the buffer is full. The false return is a backpressure signal; the data is still accepted, but the caller should slow down.
- With "strict", writes that exceed the buffer capacity throw a RangeError.
- With "drop-oldest" and "drop-newest", writes behave as described in § 5.3 Backpressure: the oldest or newest data is discarded, respectively. Writes never fail.
- endSync() throws a TypeError if the writer is already closed or errored.
- fail() transitions the writer to the errored state. If the writer is already closed or errored, it is a no-op.
6.6. The Broadcast interface
interface Broadcast {
  readonly attribute unsigned long consumerCount;
  readonly attribute unsigned long bufferSize;
  ByteReadableStream push(any... args);
  undefined cancel(optional any reason);
};
The Broadcast interface is the consumer-facing side of a broadcast channel. The push() method accepts an optional sequence of transforms followed by an optional PullOptions dictionary, and returns a ByteReadableStream. See § 13.1 Broadcast for full semantics.
Note: Broadcast is an interface, not a concrete class. Any object conforming to this interface can serve as a broadcast. Implementations should support Symbol.dispose, calling cancel() with no argument, enabling using declarations.
6.7. The Share interface
interface Share {
  readonly attribute unsigned long consumerCount;
  readonly attribute unsigned long bufferSize;
  ByteReadableStream pull(any... args);
  undefined cancel(optional any reason);
};
The Share interface is the consumer-facing side of a shared source. The pull() method accepts an optional sequence of transforms followed by an optional PullOptions dictionary, and returns an object conforming to AsyncIterable<Uint8Array[]>. See § 13.2 Share for full semantics.
Note: Share is an interface, not a concrete class. Any object conforming to this interface can serve as a share. Implementations should support Symbol.dispose, calling cancel() with no argument.
6.8. The SyncShare interface
interface SyncShare {
  readonly attribute unsigned long consumerCount;
  readonly attribute unsigned long bufferSize;
  SyncByteReadableStream pull(any... args);
  undefined cancel(optional any reason);
};
The SyncShare interface is the synchronous counterpart of Share. pull() accepts an optional sequence of sync transforms and returns an object conforming to Iterable<Uint8Array[]>.
Note: Like Share, SyncShare is an interface, not a concrete class. Implementations should support Symbol.dispose, calling cancel() with no argument.
6.9. The DuplexChannel interface
interface DuplexChannel {
  readonly attribute Writer writer;
  readonly attribute ByteReadableStream readable;
  Promise<undefined> close();
};
The DuplexChannel interface represents one end of a bidirectional communication channel. The readable attribute is an object conforming to AsyncIterable<Uint8Array[]>.
Note: DuplexChannel is an interface, not a concrete class. Runtimes and applications may provide their own implementations (e.g., wrapping native sockets or WebSockets) as long as they conform to this interface. DuplexChannel implements Symbol.asyncDispose, enabling await using syntax for automatic cleanup.
6.10. The Stream namespace
[Exposed=*]
namespace Stream {
  /* Push stream creation */
  PushStreamResult push(any... args);

  /* Stream factories */
  ByteReadableStream from(any input);
  SyncByteReadableStream fromSync(any input);

  /* Pull pipelines */
  ByteReadableStream pull(any source, any... args);
  SyncByteReadableStream pullSync(any source, any... args);

  /* Pipe operations */
  Promise<unsigned long long> pipeTo(any source, any... args);
  unsigned long long pipeToSync(any source, any... args);

  /* Consumers */
  Promise<Uint8Array> bytes(any source, optional ConsumeOptions options = {});
  Uint8Array bytesSync(any source, optional ConsumeSyncOptions options = {});
  Promise<USVString> text(any source, optional TextConsumeOptions options = {});
  USVString textSync(any source, optional TextConsumeSyncOptions options = {});
  Promise<ArrayBuffer> arrayBuffer(any source, optional ConsumeOptions options = {});
  ArrayBuffer arrayBufferSync(any source, optional ConsumeSyncOptions options = {});
  Promise<sequence<Uint8Array>> array(any source, optional ConsumeOptions options = {});
  sequence<Uint8Array> arraySync(any source, optional ConsumeSyncOptions options = {});

  /* Utilities */
  StatelessTransformFn tap(any callback);
  SyncStatelessTransformFn tapSync(any callback);
  ByteReadableStream merge(any... args);
  Promise<boolean>? ondrain(any drainable);

  /* Multi-consumer */
  BroadcastResult broadcast(optional BroadcastOptions options = {});
  Share share(any source, optional ShareOptions options = {});
  SyncShare shareSync(any source, optional ShareSyncOptions options = {});

  /* Duplex */
  sequence<DuplexChannel> duplex(optional DuplexOptions options = {});
};
Note: Many methods in the Stream namespace accept variadic arguments where the final argument may be an options dictionary and preceding arguments are transforms. The argument parsing for each method is described in its respective section.
Several methods in this specification use any for parameter types where more precise types would be desirable. This is a limitation of Web IDL, not of the API itself.
Parameters typed as any source or any input accept any object that from() can normalize (iterables, strings, ArrayBuffers, or objects implementing the toStreamable protocol), which is not expressible as a single Web IDL type. Similarly, variadic any... args parameters accept a mix of transforms and options dictionaries whose parsing is described in prose.
Return types use named types where possible: ByteReadableStream and SyncByteReadableStream for stream-returning methods, PushStreamResult and BroadcastResult for factory methods, and concrete interface types like Share, SyncShare, and DuplexChannel elsewhere. These named types are defined as typedef object because the ECMAScript iteration protocols are structural, and Web IDL cannot express parameterized iterable types as values. In all cases, the prose algorithm for each method defines the actual types accepted and returned.
7. Push streams
7.1. Stream.push()
The push(...args) method creates a push stream, a bonded Writer and AsyncIterable<Uint8Array[]> pair.
When called, it performs the following steps:
- Let transforms be an empty list and options be an empty PushStreamOptions.
- Parse variadic transform arguments from args into transforms and options.
- Let highWaterMark be the result of clamping options["highWaterMark"] to the range [1, implementation-defined maximum].
- Let backpressure be options["backpressure"].
- Let signal be options["signal"] if present; otherwise undefined.
- Create an internal slots buffer with capacity highWaterMark.
- Create a pending writes queue. If backpressure is "strict", limit its capacity to highWaterMark.
- Let writer be a new Writer backed by the internal buffer, the pending writes queue, the backpressure policy backpressure, and signal if present.
- Let pipelineController be a new AbortController. If signal is not undefined, set pipelineController’s signal to follow signal.
- Let bufferIterable be an async iterable that dequeues the next batch from the slots buffer on each step (waiting if empty).
- Let readable be the result of compose transform pipeline with bufferIterable, transforms, and a TransformCallbackOptions with signal set to pipelineController’s signal. On return or throw (the consumer stops iterating), abort pipelineController and signal cancellation to writer.
- If signal is not undefined and signal is aborted, immediately put writer into the errored state with signal’s abort reason.
- Return «[ "writer" → writer, "readable" → readable ]».
The returned readable is an async iterable yielding batched chunks. There is no sync variant of push() because push streams are inherently asynchronous.
Note: Transforms passed to push() are applied lazily when the consumer pulls, not when the producer writes.
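Note: The bonded pair can be sketched, greatly simplified, in plain JavaScript. miniPush is an illustrative name; this sketch omits transforms, backpressure policies, and error states, and uses an unbounded queue.

```javascript
// Greatly simplified sketch of a bonded writer/readable pair.
// Illustrative only; not a conforming Writer.
function miniPush() {
  const queue = [];                 // Uint8Array[] batches plus an end sentinel
  const END = Symbol("end");
  let wake = null;
  const notify = () => { if (wake) { const w = wake; wake = null; w(); } };
  const writer = {
    async write(chunk) {
      if (typeof chunk === "string") chunk = new TextEncoder().encode(chunk);
      queue.push([chunk]);          // a single chunk becomes a one-element batch
      notify();
    },
    async end() {
      queue.push(END);              // end-of-stream sentinel
      notify();
    },
  };
  const readable = (async function* () {
    for (;;) {
      while (queue.length === 0) await new Promise(r => { wake = r; });
      const item = queue.shift();
      if (item === END) return;     // consumer sees a clean end of iteration
      yield item;                   // yields a Uint8Array[] batch
    }
  })();
  return { writer, readable };
}
```

Data written to writer flows to the readable only as the consumer iterates; the end sentinel terminates the for await loop cleanly.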
7.2. The Writer interface
A Writer provides the interface for producing data. Implementations of Writer are returned by push(), broadcast(), and duplex(), but any object conforming to this interface can serve as a writer.
7.2.1. Writer.desiredSize
The desiredSize attribute, on getting, returns null if the writer is closed or errored. Otherwise it returns the number of available slots in the internal buffer (i.e., highWaterMark minus the number of occupied slots), which is always ≥ 0.
desiredSize reflects capacity in the slots buffer only; it does not account for writes waiting in the pending writes queue. When the backpressure policy is "block", a desiredSize of 0 means the slots buffer is full, regardless of how many additional writes are queued in the pending writes queue behind it. It is not a measurement of total queued items.
Note: Unlike the WHATWG Streams API, desiredSize is never negative. A value of 0 is the signal that the buffer is full; producers should await their writes or use ondrain() before writing more.
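Note: The getter's computation can be sketched non-normatively as follows (the state fields are hypothetical names for the writer's internal slots):

```javascript
// Illustrative computation of desiredSize: null when closed or errored,
// otherwise the free slots in the slots buffer, never negative.
function desiredSize(state) {
  if (state.closed || state.errored) return null;
  // occupiedSlots never exceeds highWaterMark, so the result is >= 0;
  // writes waiting in the pending writes queue are not counted.
  return state.highWaterMark - state.occupiedSlots;
}
```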
7.2.2. Writer.write()
The write(chunk, options) method writes a single chunk. It performs the following steps:
- If chunk is a USVString, set chunk to the result of UTF-8 encoding chunk.
- If the writer is closed, return a promise rejected with a TypeError.
- If the writer is errored, return a promise rejected with the stored error.
- If options["signal"] is present and aborted, return a promise rejected with its abort reason.
- Let batch be « chunk ».
- If the slots buffer has space, enqueue batch, notify drain waiters, and return a promise resolved with undefined.
- Otherwise, the slots buffer is full; proceed based on the backpressure policy:
  - If "drop-oldest": dequeue the oldest batch, enqueue batch, and return a promise resolved with undefined.
  - If "drop-newest": discard batch and return a promise resolved with undefined.
  - If "strict": if the pending writes queue is at capacity, return a promise rejected with a RangeError. Otherwise, add the write to the pending writes queue and return a promise that resolves when the batch is transferred to the slots buffer.
  - If "block": add the write to the pending writes queue and return a promise that resolves when the batch is transferred to the slots buffer.
- If the write was added to the pending writes queue and options["signal"] is present, register an abort handler that removes this write from the pending queue and rejects its promise. The writer remains open; subsequent writes may still succeed.
7.2.3. Writer.writev()
The writev(chunks, options) method writes multiple chunks as a single atomic batch. The operation is all-or-nothing: either all chunks are accepted or the entire write is rejected. It performs the same steps as write() except that batch is the result of converting each element of chunks to Uint8Array (encoding strings as UTF-8). The entire batch occupies a single slot for backpressure purposes.
7.2.4. Writer.writeSync() / Writer.writevSync()
The writeSync(chunk) method attempts a synchronous write. It performs the following steps:
- If chunk is a USVString, set chunk to the result of UTF-8 encoding chunk.
- If the writer is closed or errored, return false.
- Let batch be « chunk ».
- If the slots buffer has space, enqueue batch, notify drain waiters, and return true.
- Otherwise, the slots buffer is full; proceed based on the backpressure policy:
  - If "drop-oldest": dequeue the oldest batch, enqueue batch, and return true.
  - If "drop-newest": discard batch and return true.
  - If "strict": return false.
  - If "block": enqueue batch and return false. The data is accepted, but the false return signals backpressure.
A return of false is the signal to use the try-fallback pattern and await write().
The writevSync(chunks) method performs the same steps but enqueues the full batch as a single slot. Like writev(), the operation is all-or-nothing: it returns true only if all chunks are accepted, or false if none are. A return of false is the signal to use the try-fallback pattern and await writev().
7.2.5. Writer.end() / Writer.endSync()
A writer in the closing state has had end() called but has not yet fully drained its buffered data to the consumer.
The end(options) method signals end-of-stream and waits for buffered data to drain. It performs the following steps:
- If the writer is errored, return a promise rejected with the stored error.
- If the writer is closing or closed, return a promise resolved with the total number of bytes written (idempotent).
- Transition the writer to the closing state.
- Enqueue an end-of-stream sentinel into the buffer.
- Return a promise that resolves with the total number of bytes written once the consumer has consumed past the end sentinel. If fail() is called while closing, the promise rejects with the fail reason instead.
Note: The byte count reflects total bytes passed through the writer, including bytes discarded under "drop-oldest" or "drop-newest" policies. It is a measure of throughput, not of bytes delivered to the consumer.
The endSync() method attempts synchronous end-of-stream. It returns the total number of bytes written (≥ 0) on success, or −1 if the writer cannot end synchronously for any reason. A return of −1 is the signal to use the try-fallback pattern and await end().
If the writer is closing or closed, it returns the total number of bytes written (idempotent). If the writer can transition to the closed state synchronously (e.g., the buffer is empty and no async cleanup is required), it does so and returns the byte count. Otherwise it returns −1.
Note: No assumption should be made about why endSync() returns −1. The writer may be errored, the buffer may not be empty, the underlying resource may not support synchronous close, or the implementation may simply not support it. The caller should always fall back to end().
7.2.6. Writer.fail()
The fail(reason) method puts the writer into a terminal error state synchronously.
- If the writer is errored or closed, it is a no-op.
- If the writer is closing (draining after end()): transition to the errored state with reason, reject the pending end promise with reason, reject all pending read promises with reason, and call notify drain waiters with error reason.
- If the writer is open: transition to the errored state with reason, reject all pending write promises with reason, reject all pending read promises with reason, and call notify drain waiters with error reason.
Note: fail() is unconditionally synchronous. If called while the writer is closing, it short-circuits the graceful drain initiated by end().
7.2.7. Disposal
Implementations of Writer should support both Symbol.asyncDispose and Symbol.dispose:
- Symbol.asyncDispose: If the writer is open, calls fail() with no argument and returns a resolved promise. If the writer is closing (end() was called and the buffer is draining), returns the pending end promise, allowing the graceful drain to complete. If the writer is closed or errored, returns a resolved promise.
- Symbol.dispose: Calls fail() with no argument unconditionally (it cannot wait for a drain).
8. Stream factories
8.1. Stream.from()
The from(input) method creates an AsyncIterable<Uint8Array[]> from various input types, normalizing to the batched chunks format. It performs the following steps:
- If input is null or undefined, throw a TypeError.
- If input is a USVString, return an async iterable yielding a single batch containing the UTF-8 encoded result.
- If input is an ArrayBuffer, return an async iterable yielding a single batch containing a new Uint8Array wrapping input (zero-copy).
- If input is an ArrayBufferView, return an async iterable yielding a single batch containing a Uint8Array view over the same buffer region (zero-copy).
- If input has a method keyed by Symbol.for('Stream.toAsyncStreamable'), call that method and recursively normalize the result (awaiting it if it returns a promise). This step takes precedence over the toStreamable protocol and the iteration protocols below.
- If input has a method keyed by Symbol.for('Stream.toStreamable'), call that method and recursively normalize the result. This step takes precedence over the iteration protocols below.
- If input has a Symbol.asyncIterator method, return a new AsyncIterable<Uint8Array[]> that pulls from the resulting async iterator and normalizes each yielded value to a Uint8Array[] batch. Each yielded value is recursively normalized: strings are UTF-8 encoded; ArrayBuffer and ArrayBufferView values are converted to Uint8Array; arrays are flattened, with each element normalized; nested iterables and async iterables are recursively consumed and flattened; objects implementing the toStreamable or toAsyncStreamable protocol are converted via that protocol. Values that are not convertible to Uint8Array cause a TypeError. Synchronous values encountered in sequence should be batched together into a single Uint8Array[] to maximize batching efficiency.
- If input has a Symbol.iterator method, treat the sync iterator as an async source and normalize as in the previous step (except that nested async iterables and the toAsyncStreamable protocol are not supported and cause a TypeError during normalization).
- Otherwise, throw a TypeError.
Note: WHATWG ReadableStream objects implement Symbol.asyncIterator and are therefore accepted by the Symbol.asyncIterator step above. However, a ReadableStream yields individual chunks (not Uint8Array[] batches), so each chunk is normalized and wrapped into a single-element batch. No special-casing of ReadableStream is required or defined by this specification.
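Note: A simplified, non-normative sketch of the normalization follows. miniFrom is an illustrative name; it omits the toStreamable protocols and, for simplicity, wraps each value yielded by an iterable in its own single-chunk batch rather than coalescing runs of synchronous values as the algorithm above recommends.

```javascript
// Simplified sketch of from()'s normalization for common input types.
function miniFrom(input) {
  if (input === null || input === undefined) throw new TypeError("bad input");
  if (typeof input === "string") {
    const bytes = new TextEncoder().encode(input);          // UTF-8 encode
    return (async function* () { yield [bytes]; })();       // one single-chunk batch
  }
  if (input instanceof ArrayBuffer) {
    return (async function* () { yield [new Uint8Array(input)]; })(); // zero-copy wrap
  }
  if (ArrayBuffer.isView(input)) {
    const view = new Uint8Array(input.buffer, input.byteOffset, input.byteLength);
    return (async function* () { yield [view]; })();        // zero-copy view
  }
  if (typeof input[Symbol.asyncIterator] === "function" ||
      typeof input[Symbol.iterator] === "function") {
    return (async function* () {
      // for await also drives sync iterables; each value becomes its own batch here
      for await (const value of input) yield [miniNormalizeChunk(value)];
    })();
  }
  throw new TypeError("not streamable");
}

function miniNormalizeChunk(value) {
  if (typeof value === "string") return new TextEncoder().encode(value);
  if (value instanceof Uint8Array) return value;
  throw new TypeError("chunk not convertible to Uint8Array");
}
```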
8.2. Stream.fromSync()
The fromSync(input) method creates an Iterable<Uint8Array[]> from synchronous input types. If input is null or undefined, it throws a TypeError. Otherwise it performs the same normalization as from(), but accepts only synchronous inputs (USVString, ArrayBuffer, ArrayBufferView, objects with Symbol.for('Stream.toStreamable'), or objects with Symbol.iterator). The toAsyncStreamable protocol is ignored (neither checked nor rejected). Explicitly asynchronous inputs are rejected with a TypeError: objects with Symbol.asyncIterator (and no synchronous interface), and Promise objects (even if the promise would resolve to a synchronous streamable type such as a string, Uint8Array, or iterable).
9. Pull pipelines
9.1. Stream.pull()
The pull(source, ...args) method creates a pull pipeline, a lazy transform chain.
- Let normalized be the result of calling from() with source.
- Let transforms be an empty list and options be an empty PullOptions.
- Parse variadic transform arguments from the remaining args into transforms and options.
- Let signal be options["signal"] if present; otherwise undefined.
- If signal is not undefined and aborted, return an async iterable that immediately throws the abort reason.
- Let pipelineController be a new AbortController. If signal is not undefined, set pipelineController’s signal to follow signal.
- Let transformOptions be a TransformCallbackOptions with signal set to pipelineController’s signal.
- Let composed be the result of compose transform pipeline with normalized, transforms, and transformOptions.
- Return composed. On return (consumer breaks), the pipeline controller is aborted.
9.2. Stream.pullSync()
The pullSync(source, ...args) method creates a synchronous pull pipeline.
- Let normalized be the result of calling fromSync() with source.
- Let transforms be an empty list.
- For each arg in the remaining args: if arg is a transform argument, append it to transforms; otherwise throw a TypeError.
- Let composed be the result of compose sync transform pipeline with normalized and transforms.
- Return composed.
Note: Unlike pull(), pullSync does not accept an options dictionary (there is no cancellation signal for synchronous pipelines). All arguments after source must be sync transforms.
9.3. Transforms
Transforms come in two forms, distinguished by whether the value is a function or an object:
Stateless transforms are plain functions with the signature: (chunks, options) => result where chunks is Uint8Array[] | null and options is a TransformCallbackOptions. The function is called once per batch. When chunks is null, it is a flush signal indicating end-of-stream.
Stateful transforms are objects with a transform property whose value is a function with the signature: (source, options) => AsyncIterable, where source is an AsyncIterable<Uint8Array[] | null> and options is a TransformCallbackOptions. The function receives the entire upstream source and returns an async iterable of output chunks.
A value is classified as follows: if it is a function, it is a stateless transform; if it is an object with a "transform" property that is a function, it is a stateful transform; otherwise, it is not a transform.
Transform return values are flexible. A transform may return: null (no output), Uint8Array[] (a batch), a single Uint8Array, a USVString (UTF-8 encoded), an iterable or async iterable (flattened), or a promise (awaited, async only).
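As a non-normative sketch, the two transform shapes might look like this. The names upperCase and byteCounter are illustrative, not part of the API:

```javascript
// Illustrative (not normative): the two transform shapes from § 9.3.

// Stateless transform: a plain function called once per batch.
// `chunks` is Uint8Array[] or null (the end-of-stream flush signal).
const upperCase = (chunks, options) => {
  if (chunks === null) return null; // nothing extra to emit on flush
  return chunks.map((chunk) =>
    Uint8Array.from(chunk, (b) => (b >= 0x61 && b <= 0x7a ? b - 0x20 : b)));
};

// Stateful transform: an object whose `transform` method receives the
// whole upstream source and returns an async iterable of output.
// This one passes data through and appends the total byte count on flush.
const byteCounter = {
  async *transform(source, options) {
    let total = 0;
    for await (const chunks of source) {
      if (chunks === null) {
        // Flush signal: emit the final count as UTF-8 text.
        yield new TextEncoder().encode(String(total));
        return;
      }
      for (const chunk of chunks) total += chunk.byteLength;
      yield chunks; // pass the batch through unchanged
    }
  },
};
```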
To compose a transform pipeline, given an async iterable source, a list of transforms, and a TransformCallbackOptions options:
- Let current be source.
- Let statelessRun be an empty list.
- For each transform in transforms:
  - If transform is a stateless transform (a function), append it to statelessRun.
  - If transform is a stateful transform (an object with a "transform" method):
    - If statelessRun is not empty, set current to a new async iterable that pulls from current and applies each function in statelessRun in order per batch (calling each with the current batch and options, normalizing each result). Reset statelessRun to an empty list.
    - Let wrappedSource be a new async iterable that pulls from current and appends a final null value (the flush signal) after the source is exhausted.
    - Set current to the result of calling transform’s "transform" method with (wrappedSource, options). The returned async iterable becomes the new upstream for subsequent transforms.
- If statelessRun is not empty after processing all transforms, set current to a new async iterable that pulls from current and applies each remaining function in statelessRun in order per batch, followed by a final pass with null (the flush signal) when the source is exhausted.
- Return current.
The result is a single composed async iterable. Each stateful transform wraps everything upstream (including any preceding stateless transforms) and becomes the source for everything downstream. This means a chain like [stateless₁, stateful₂, stateless₃, stateful₄] produces:
stateful₄( stateless₃( stateful₂( stateless₁( source ) ) ) )
where each layer only executes when the outermost consumer pulls.
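A minimal, non-normative sketch of this composition in JavaScript (simplified: transform outputs are not normalized, cancellation is omitted, and only the trailing stateless run receives a flush pass):

```javascript
// Illustrative sketch (not normative) of "compose transform pipeline":
// consecutive stateless functions are fused into one layer, each stateful
// transform wraps everything upstream, and a null flush signal is
// appended before each stateful layer.
function compose(source, transforms, options = {}) {
  // Fuse a run of stateless functions into one per-batch mapping layer.
  const fuse = (src, fns, withFlush) => (async function* () {
    for await (const batch of src) {
      let value = batch;
      for (const fn of fns) if ((value = fn(value, options)) === null) break;
      if (value !== null) yield value;
    }
    if (withFlush) {
      // Simplified flush: each function gets one null pass at end-of-stream.
      for (const fn of fns) {
        const out = fn(null, options);
        if (out !== null) yield out;
      }
    }
  })();

  // Append the null flush signal after a source is exhausted.
  const withFlushSignal = (src) => (async function* () {
    yield* src;
    yield null;
  })();

  let current = source;
  let run = [];
  for (const t of transforms) {
    if (typeof t === 'function') { run.push(t); continue; }
    if (run.length) { current = fuse(current, run, false); run = []; }
    current = t.transform(withFlushSignal(current), options);
  }
  if (run.length) current = fuse(current, run, true);
  return current;
}
```

Because every layer is an async generator, nothing executes until the outermost consumer pulls, matching the lazy pull-through design.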
To compose a sync transform pipeline, given an iterable source and a list of transforms:
- Let current be source.
- Let statelessRun be an empty list.
- For each transform in transforms:
  - If transform is a stateless sync transform (a function), append it to statelessRun.
  - If transform is a stateful sync transform (an object with a "transform" method):
    - If statelessRun is not empty, set current to a new iterable that pulls from current and applies each function in statelessRun in order per batch, normalizing each result. Reset statelessRun to an empty list.
    - Let wrappedSource be a new iterable that pulls from current and appends a final null value (the flush signal) after the source is exhausted.
    - Set current to the result of calling transform’s "transform" method with wrappedSource. The returned iterable becomes the new upstream.
- If statelessRun is not empty, set current to a new iterable that applies each remaining function in order per batch, followed by a null flush signal when exhausted.
- Return current.
Note: Sync stateless transforms receive (chunks) with no options parameter (there is no TransformCallbackOptions for sync transforms, as there is no cancellation signal). Sync stateful transforms receive (source) only.
To normalize transform output, given value: If value is null, return null. If value is a Uint8Array, return « value ». If value is a USVString, return « UTF-8 encoded value ». If value is an ArrayBuffer or ArrayBufferView, convert it to Uint8Array and return it as a single-element batch. If value is iterable, flatten it to Uint8Array chunks and return them. Otherwise throw a TypeError.
10. Pipe operations
10.1. Stream.pipeTo()
The pipeTo(source, ...args) method asynchronously consumes source and writes to a Writer destination, with optional transforms. It returns a promise resolving to the total number of bytes written.
- Let normalized be the result of calling from() with source.
- Let (transforms, writer, options) be the result of parse pipeTo arguments from the remaining args with requiredMethod set to "write".
- Let signal be options["signal"] if present; otherwise undefined.
- Let preventClose be options["preventClose"] and preventFail be options["preventFail"].
- Let pullOptions be a new PullOptions with signal set to signal if present.
- Let pipeline be the result of pull() with normalized, the extracted transforms, and pullOptions.
- Let totalBytes be 0.
- Asynchronously iterate pipeline. For each batch yielded, for each chunk in batch, add chunk’s byte length to totalBytes. Then write the batch to writer: if writer has a writev method, call writer.writev() with batch and «[ "signal" → signal ]»; otherwise call writer.write() for each chunk, passing signal in the WriteOptions.
- On successful completion, if preventClose is false and writer has an end method, call writer.end() with «[ "signal" → signal ]».
- On error e, if preventFail is false and writer has a fail method, call writer.fail() with e. Re-throw e.
- Return a promise resolved with totalBytes.
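The batch write loop above (vectored writev() when available, per-chunk write() otherwise, with byte accounting) can be sketched non-normatively; pipeBatches is a hypothetical helper, and end()/fail() handling is elided:

```javascript
// Illustrative sketch (not normative) of the pipeTo() write loop.
// Prefers a vectored writev() when the destination provides one and
// falls back to per-chunk write(); returns the total bytes written.
async function pipeBatches(pipeline, writer, signal) {
  let totalBytes = 0;
  for await (const batch of pipeline) {
    for (const chunk of batch) totalBytes += chunk.byteLength;
    if (typeof writer.writev === 'function') {
      await writer.writev(batch, { signal }); // one call per batch
    } else {
      for (const chunk of batch) await writer.write(chunk, { signal });
    }
  }
  return totalBytes;
}
```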
10.2. Stream.pipeToSync()
The pipeToSync(source, ...args) method is the synchronous counterpart of pipeTo().
- Let normalized be the result of calling fromSync() with source.
- Let (transforms, writer, options) be the result of parse pipeTo arguments from the remaining args with requiredMethod set to "writeSync".
- Let preventClose be options["preventClose"] and preventFail be options["preventFail"].
- Let pipeline be the result of compose sync transform pipeline with normalized and the extracted transforms.
- Let totalBytes be 0.
- Synchronously iterate pipeline. For each batch yielded, for each chunk in batch, add chunk’s byte length to totalBytes. Then write the batch to writer: if writer has a writevSync method, call writer.writevSync() with batch; otherwise call writer.writeSync() for each chunk.
- On successful completion, if preventClose is false and writer has an endSync method, call writer.endSync().
- On error e, if preventFail is false and writer has a fail method, call writer.fail() with e. Re-throw e.
- Return totalBytes.
11. Consumers
Consumer functions are terminal operations that collect an entire stream into memory. All consumers accept any input that from() can normalize. The optional limit parameter protects against unbounded memory growth; exceeding it throws a RangeError.
11.1. Stream.bytes() / Stream.bytesSync()
The bytes(source, options) method collects all bytes from source into a single Uint8Array.
- Let normalized be the result of from() with source.
- Let signal be options["signal"] if present.
- Let limit be options["limit"] if present.
- If signal is present and aborted, return a promise rejected with its abort reason.
- Let chunks be an empty list and totalBytes be 0.
- Asynchronously iterate normalized. Before each iteration step, if signal is present and aborted, stop iteration and reject with signal’s abort reason. For each batch yielded, for each chunk, add its byte length to totalBytes. If limit is present and totalBytes exceeds limit, throw a RangeError. Append the chunk to chunks.
- If chunks is empty, return a zero-length Uint8Array.
- If chunks has exactly one element whose buffer is not shared, return that element.
- Otherwise, concatenate all chunks into a new Uint8Array and return it.
The bytesSync(source, options) method performs the same algorithm synchronously using fromSync() for normalization.
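The final collection steps, including the single-chunk zero-copy fast path, can be sketched non-normatively (collectBytes is a hypothetical helper):

```javascript
// Illustrative (not normative): the final steps of bytes() — return the
// single collected chunk when possible, otherwise concatenate.
function collectBytes(chunks, totalBytes) {
  if (chunks.length === 0) return new Uint8Array(0);
  if (chunks.length === 1 && !(chunks[0].buffer instanceof SharedArrayBuffer)) {
    return chunks[0]; // zero-copy fast path: hand back the one chunk
  }
  const result = new Uint8Array(totalBytes);
  let offset = 0;
  for (const chunk of chunks) {
    result.set(chunk, offset);
    offset += chunk.byteLength;
  }
  return result;
}
```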
11.2. Stream.text() / Stream.textSync()
The text(source, options) method collects all bytes and decodes them as text.
- Let bytes be the result of bytes() with source and a ConsumeOptions containing options["signal"] and options["limit"].
- Let encoding be options["encoding"] (default "utf-8").
- If encoding is not a supported encoding, throw a RangeError. Conforming implementations must support at least "utf-8".
- Let decoder be a new TextDecoder with encoding and fatal set to true.
- Return the result of calling decoder.decode(bytes).
Note: The fatal: true default means invalid byte sequences throw a TypeError rather than being silently replaced.
The textSync(source, options) method performs the same algorithm synchronously.
11.3. Stream.arrayBuffer() / Stream.arrayBufferSync()
The arrayBuffer(source, options) method collects all bytes into an ArrayBuffer.
- Let bytes be the result of bytes() with source and options.
- If bytes is a view over the entire buffer (offset 0, length equals buffer length), return the buffer directly.
- Otherwise return a copy of the relevant region as a new ArrayBuffer.
The arrayBufferSync(source, options) method performs the same algorithm synchronously.
11.4. Stream.array() / Stream.arraySync()
The array(source, options) method collects all chunks, preserving chunk boundaries.
- Let normalized be the result of from() with source.
- Let signal be options["signal"] if present.
- Let limit be options["limit"] if present.
- If signal is present and aborted, return a promise rejected with its abort reason.
- Let result be an empty list and totalBytes be 0.
- Asynchronously iterate normalized. Before each iteration step, if signal is present and aborted, stop iteration and reject with signal’s abort reason. For each batch, for each chunk, add its byte length to totalBytes. If limit is present and totalBytes exceeds limit, throw a RangeError. Append the chunk to result.
- Return result as a JavaScript Array.
The arraySync(source, options) method performs the same algorithm synchronously.
12. Utilities
12.1. Stream.tap() / Stream.tapSync()
The tap(callback) method creates a stateless transform that observes chunks without modifying them. The callback is called with each batch (or null for the flush signal) and the TransformCallbackOptions (including the pipeline’s signal). Its return value is ignored; the original batch is yielded unchanged. If the callback throws (or returns a rejected promise, for async callbacks), the error propagates to the consumer and the pipeline is torn down; the tap does not swallow errors. It returns a transform function suitable for use in pull() or push().
The tapSync(callback) method creates a synchronous tap transform.
Note: tap does not prevent the callback from mutating chunks in-place. The guarantee is only that the callback’s return value is ignored.
12.2. Stream.merge()
The merge(...args) method interleaves multiple async sources into a single async iterable, yielding batches in the order they become available across all sources. All data from all sources is preserved; no batches are discarded.
- Let options be an empty MergeOptions.
- If the final element of args is a plain object that has neither Symbol.asyncIterator nor Symbol.iterator, treat it as the options dictionary: set options to that element and remove it from args.
- Let sources be args. For each element in sources, normalize it via from().
- Let signal be options["signal"] if present.
- Create an active iterator for each source by calling Symbol.asyncIterator on it.
- Let ready be an empty queue of settled batches.
- Return a new AsyncIterable<Uint8Array[]> that, on each iteration step:
  - For each active iterator that does not already have a pending .next() promise, call .next() once and store the resulting promise. This ensures at most one outstanding pull per source at any time.
  - If ready is not empty, dequeue the next batch from ready and yield it.
  - Otherwise, wait for any one of the pending promises to settle.
  - For each promise that settled with a non-done result, enqueue its value into ready.
  - For each promise that settled with a done result, remove that iterator from the active set.
  - If ready is not empty, dequeue the next batch from ready and yield it.
  - When the active set is empty and ready is empty, the merged iterable completes.
  - On cancellation (via signal or consumer break), call .return() on all active iterators and discard ready.
Because multiple sources produce data independently but the consumer pulls through a single iterable, some internal queueing is unavoidable. When multiple sources settle between consumer pulls, their batches accumulate in ready and are drained in settlement order on subsequent pulls. However, each source has at most one pending .next() call at any time; the implementation does not proactively pull ahead from sources beyond what is needed to detect availability. A source’s next .next() is only called after its previous result has been enqueued into ready.
Note: This is not Promise.race semantics where "losing" values are discarded. Every batch from every source is yielded exactly once through the ready queue.
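The core of this algorithm (one pending .next() per source, a ready queue drained in settlement order) can be sketched non-normatively; cancellation and input normalization are elided:

```javascript
// Illustrative sketch (not normative) of Stream.merge(): at most one
// pending next() per source; settled batches drain through a ready
// queue so nothing is lost when several sources settle between pulls.
async function* merge(...iterables) {
  const pending = new Map(); // iterator -> tagged next() promise (or undefined)
  const ready = [];
  for (const it of iterables) {
    pending.set(it[Symbol.asyncIterator](), undefined);
  }
  const pull = (iterator) => {
    pending.set(iterator, iterator.next().then((result) => ({ iterator, result })));
  };
  while (pending.size > 0 || ready.length > 0) {
    // Ensure every active iterator has exactly one outstanding next().
    for (const [iterator, promise] of pending) if (promise === undefined) pull(iterator);
    if (ready.length > 0) { yield ready.shift(); continue; }
    // Wait for any source to settle; losers keep their promises for later.
    const { iterator, result } = await Promise.race(pending.values());
    if (result.done) pending.delete(iterator);
    else { ready.push(result.value); pending.set(iterator, undefined); }
  }
}
```

Note that a "losing" promise in the race is not discarded: it stays in the pending map and is enqueued into ready when it eventually wins a later race.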
12.3. Stream.ondrain()
The ondrain(drainable) method waits for a Writer’s backpressure to clear.
- If drainable does not have a method keyed by Symbol.for('Stream.drainableProtocol'), return null.
- Let result be the result of calling that method on drainable.
- Return result, which is: null if drain is not applicable; a promise resolving to true when backpressure clears; a promise resolving to false if the writer closes while waiting; or a promise that rejects if the writer errors.
Note: await null evaluates to null, which is falsy. The pattern const canWrite = await Stream.ondrain(writer); if (!canWrite) return; works correctly even when the protocol is not supported.
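A minimal, non-normative sketch of the delegation and the await pattern from the note:

```javascript
// Illustrative (not normative): ondrain() delegates to the drainable
// protocol symbol and returns null when the protocol is absent.
const DRAINABLE = Symbol.for('Stream.drainableProtocol');

function ondrain(drainable) {
  if (typeof drainable?.[DRAINABLE] !== 'function') return null;
  return drainable[DRAINABLE]();
}
```

Because `await null` evaluates to `null`, callers can uniformly write `const canWrite = await ondrain(writer); if (!canWrite) return;` whether or not the writer supports the protocol.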
13. Multi-consumer streams
13.1. Broadcast
13.1.1. Stream.broadcast()
The broadcast(options) method creates a broadcast channel, a push-model multi-consumer pattern.
- Let highWaterMark be the result of clamping options["highWaterMark"] to the range [1, implementation-defined maximum].
- Let backpressure be options["backpressure"].
- Create a shared circular buffer with capacity highWaterMark.
- Let writer be a new Writer backed by the shared buffer with backpressure policy.
- Let broadcast be a new Broadcast object backed by the shared buffer.
- Return «[ "writer" → writer, "broadcast" → broadcast ]».
13.1.2. Broadcast backpressure
Backpressure in a broadcast channel is governed entirely by the slowest consumer. Each consumer maintains a cursor into the shared circular buffer. The desiredSize of the broadcast’s Writer is determined by the consumer whose cursor is furthest behind: the consumer with the most unconsumed data.
Concretely:
- The shared buffer cannot overwrite entries that the slowest consumer has not yet read. The effective buffer capacity, from the writer’s perspective, is the distance between the write position and the slowest consumer’s cursor.
- When the slowest consumer falls behind and the buffer fills, the configured backpressure policy applies to the writer: "strict" rejects excess writes, "block" causes writes to wait, "drop-oldest" advances the slowest cursor (discarding unread data for that consumer), and "drop-newest" discards incoming writes.
- Faster consumers that have already read all available data will wait for the next write. They are never the bottleneck.
- When a slow consumer detaches (its iterator returns or throws), backpressure is immediately recalculated based on the remaining consumers. This may unblock pending writes if the detached consumer was the slowest.
- If there are no consumers, writes succeed up to the buffer capacity as governed by the backpressure policy. Data remains in the buffer and is available to consumers that attach before it is overwritten. Late-joining consumers begin reading from the oldest entry still in the buffer at the time they call push().
Note: Because a single slow consumer can stall all producers, callers should consider using "drop-oldest" or "drop-newest" policies when consumers may have widely varying consumption rates and the data is tolerant of loss.
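The slowest-consumer arithmetic can be sketched non-normatively. This toy model (a hypothetical desiredSize helper) treats the write position and cursors as monotonically increasing counters rather than wrapped buffer indices, and assumes at least one attached consumer:

```javascript
// Illustrative (not normative): in a broadcast channel, the writer's
// desiredSize is the buffer capacity minus the data the slowest
// consumer has not yet read.
function desiredSize(highWaterMark, writePosition, cursors) {
  // Assumes cursors is non-empty; positions are monotonic counters.
  const slowest = Math.min(...cursors);
  const unconsumed = writePosition - slowest;
  return highWaterMark - unconsumed;
}
```

A non-positive result means the buffer is full from the slowest consumer's perspective, and the configured backpressure policy applies to further writes.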
13.1.3. Broadcast.push()
The push(...args) method creates a new consumer of the broadcast.
- Let transforms be an empty list and options be an empty PullOptions.
- Parse variadic transform arguments from args into transforms and options.
- Create a new cursor at the current write position in the shared buffer.
- Let pipelineController be a new AbortController. If options["signal"] is present, set pipelineController’s signal to follow it.
- Let cursorIterable be an async iterable that advances the cursor on each step, waiting for new data if needed.
- Return the result of compose transform pipeline with cursorIterable, transforms, and a TransformCallbackOptions with signal set to pipelineController’s signal. On break/error, abort pipelineController and detach the cursor.
13.1.4. Broadcast.cancel()
The cancel(reason) method cancels all consumers. If reason is provided, each consumer sees it as an error; otherwise they see clean completion.
Implementations should support Symbol.dispose by calling cancel() with no argument.
13.2. Share
13.2.1. Stream.share()
The share(source, options) method creates a shared source, a pull-model multi-consumer pattern.
- Let normalized be the result of from() with source.
- Let highWaterMark be the result of clamping options["highWaterMark"] to the range [1, implementation-defined maximum].
- Let backpressure be options["backpressure"].
- Create a shared buffer with capacity highWaterMark. The source iterator is created lazily on first pull.
- Return a new Share object backed by the shared buffer, backpressure policy, and normalized as the upstream source.
13.2.2. Share buffering and backpressure
A shared source uses a shared buffer and per-consumer cursors in the same manner as a broadcast channel, with one key difference: data enters the buffer by being pulled from the upstream source rather than pushed by a writer.
- Each consumer maintains a cursor into the shared buffer. When a consumer pulls and its cursor points to data already in the buffer, it reads directly without pulling from the upstream source.
- When a consumer pulls and no data is available at its cursor position, the shared source pulls the next batch from the upstream source iterator, stores it in the shared buffer, and advances the cursor.
- Only one pull from the upstream source occurs at a time, regardless of how many consumers request data concurrently. If multiple consumers are waiting for the next batch, a single upstream pull satisfies all of them.
- The shared buffer is trimmed as all consumers advance past buffered entries. An entry is eligible for removal only when every active consumer’s cursor has moved beyond it.
- When the slowest consumer falls behind and the buffer reaches capacity, the configured backpressure policy determines what happens when a faster consumer attempts to pull new data from upstream. With "strict", the pull is rejected: the faster consumer receives an error indicating the buffer is full, but the consumer’s iterator remains valid and subsequent pulls may succeed once the slow consumer advances. With "block", the pull is blocked until the slowest consumer advances and frees buffer space. With "drop-oldest", the oldest entry is discarded (advancing the slowest cursor) to make room for the new upstream pull. With "drop-newest", the upstream pull result is discarded.
Note: A rejected pull under "strict" does not terminate the consumer’s iterator. Within the constraints of the async iterator protocol, the .next() call returns a rejected promise, but the iterator is not put into a "done" state. The consumer may retry by calling .next() again. However, if the consumer is using for await...of, the rejected promise will cause the loop to exit and the iterator to be closed via .return(). This is standard async iterator protocol behavior, not specific to this API.
- When a slow consumer detaches, the buffer may be trimmed and blocked consumers may be unblocked, just as with broadcast.
- If there are no consumers, no upstream pulls occur (the source is lazy).
13.2.3. Share.pull()
The pull(...args) method creates a new consumer.
- Let transforms and options be extracted from args via parse variadic transform arguments.
- Create a new cursor at the current buffer position. If the source iterator has not been created yet, create it now (lazy initialization).
- Let pipelineController be a new AbortController. If options["signal"] is present, set pipelineController’s signal to follow it.
- Let cursorIterable be an async iterable that checks the cursor position, reads from the buffer if available or pulls from the source iterator if not, and trims the buffer when all consumers have advanced.
- Return the result of compose transform pipeline with cursorIterable, transforms, and a TransformCallbackOptions with signal set to pipelineController’s signal. On break/error, abort pipelineController and detach the cursor.
13.2.4. Share.cancel()
The cancel(reason) method cancels all consumers and closes the source iterator. Implementations should support Symbol.dispose by calling cancel() with no argument.
13.2.5. Stream.shareSync()
The shareSync(source, options) method creates a synchronous shared source.
- Let normalized be the result of fromSync() with source.
- Let highWaterMark be the result of clamping options["highWaterMark"] to the range [1, implementation-defined maximum].
- Let backpressure be options["backpressure"].
- Create a shared buffer with capacity highWaterMark. The source iterator is created lazily on first pull.
- Return a new SyncShare object backed by the shared buffer, backpressure policy, and normalized as the upstream source.
14. Duplex channels
14.1. Stream.duplex()
The duplex(options) method creates a pair of connected DuplexChannel objects for bidirectional communication, similar to Unix socketpair().
- Let sharedHWM be options["highWaterMark"] and sharedBP be options["backpressure"].
- Let aOpts be options["a"] merged with shared defaults, and bOpts be options["b"] merged with shared defaults.
- Create push stream A→B with aOpts settings, yielding writerA and readableAB.
- Create push stream B→A with bOpts settings, yielding writerB and readableBA.
- Let channelA be a new DuplexChannel with writer writerA (writes go to B) and readable readableBA (reads come from B). Calling close() on channelA ends writerA (signaling end-of-stream to B’s readable) and stops iteration of readableBA (by calling .return() on its iterator if active). The close is idempotent.
- Let channelB be a new DuplexChannel with writer writerB (writes go to A) and readable readableAB (reads come from A). Calling close() on channelB ends writerB and stops iteration of readableAB. The close is idempotent.
- If options["signal"] is present, register an abort handler that calls fail() on both writers with the signal’s abort reason.
- Return « channelA, channelB ».
Example:
const [client, server] = Stream.duplex();
{
  await using conn = client;
  await conn.writer.write('Hello');
} // conn.close() called automatically
15. Protocol symbols
Protocol symbols allow user-defined objects to participate in the streaming API. All symbols are created via Symbol.for(), allowing third-party code to implement protocols without importing these symbols directly.
15.1. ToStreamable protocols
Symbol.for('Stream.toStreamable')
  A method returning a synchronous streamable representation. The return value may be any type accepted by from() or fromSync(): a USVString, ArrayBuffer, ArrayBufferView, or an object with Symbol.iterator. Used by both sync and async paths.
Symbol.for('Stream.toAsyncStreamable')
  A method returning (or returning a promise resolving to) an async streamable representation. Used by from() only (not by sync functions). When both protocols are present, async paths prefer toAsyncStreamable.
15.2. Multi-consumer protocols
Symbol.for('Stream.broadcastProtocol')
  A method accepting an optional BroadcastOptions and returning a Broadcast object. Enables custom types to provide optimized broadcast implementations.
Symbol.for('Stream.shareProtocol')
  A method accepting an optional ShareOptions and returning a Share object.
Symbol.for('Stream.shareSyncProtocol')
  A method accepting an optional ShareSyncOptions and returning a SyncShare object.
15.3. Drainable protocol
Symbol.for('Stream.drainableProtocol')
  A method returning null or a promise resolving to a boolean. Used by ondrain() to wait for backpressure to clear. Return values: null if drain is not applicable; a promise resolving to true when ready; a promise resolving to false if the writer closed; a promise rejecting if the writer errored. Writer objects returned by push() and broadcast() automatically implement this protocol.
16. Abstract operations
16.1. Parse variadic transform arguments
To parse variadic transform arguments from a list args:
- Let transforms be an empty list and options be undefined.
- For each arg in args: if arg is a transform argument, append it to transforms; otherwise, if options is undefined, set options to arg; otherwise throw a TypeError.
- If options is undefined, set options to an empty dictionary.
- Return (transforms, options).
To parse pipeTo arguments from a list args, given a requiredMethod ("write" for Writer or "writeSync" for SyncWriter), returning a list of transforms, a writer, and an options dictionary:
- If args is empty, throw a TypeError (a writer is required).
- Let last be the final element of args.
- Let options be undefined.
- If last is not a transform argument and does not have a requiredMethod method, treat last as the options dictionary: set options to last and remove it from args.
- If args is now empty, throw a TypeError (a writer is required).
- Let writer be the final element of args and remove it from args.
- If writer does not have a requiredMethod method, throw a TypeError.
- Let transforms be an empty list.
- For each remaining arg in args: if arg is a transform argument, append it to transforms; otherwise throw a TypeError.
- If options is undefined, set options to an empty dictionary.
- Return (transforms, writer, options).
Note: The convention across all variadic methods is that the options dictionary, when present, is always the final argument. Transforms precede it in definition order. For pipeTo() and pipeToSync(), the argument order is (transforms..., writer, options?): the writer is either the final argument (when no options are provided) or second-to-last (when options are provided), with any number of transforms preceding it.
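The (transforms..., writer, options?) convention can be sketched non-normatively; parsePipeToArgs and isTransform are hypothetical helper names:

```javascript
// Illustrative sketch (not normative): parsing (transforms..., writer,
// options?) for pipeTo(), where requiredMethod is "write" or "writeSync".
const isTransform = (arg) =>
  typeof arg === 'function' ||
  (arg !== null && typeof arg === 'object' && typeof arg.transform === 'function');

function parsePipeToArgs(args, requiredMethod) {
  args = [...args];
  if (args.length === 0) throw new TypeError('a writer is required');
  let options = {};
  const last = args[args.length - 1];
  // A trailing value that is neither a transform nor a writer is options.
  if (!isTransform(last) && typeof last?.[requiredMethod] !== 'function') {
    options = args.pop();
  }
  if (args.length === 0) throw new TypeError('a writer is required');
  const writer = args.pop();
  if (typeof writer?.[requiredMethod] !== 'function') {
    throw new TypeError(`writer must have a ${requiredMethod}() method`);
  }
  for (const arg of args) {
    if (!isTransform(arg)) throw new TypeError('expected a transform');
  }
  return { transforms: args, writer, options };
}
```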
16.2. Notify drain waiters
To notify drain waiters for a writer, given an optional reason:
- For each pending drain promise associated with this writer: if reason was given, reject the promise with reason; otherwise, if the writer is closed, resolve the promise with false; otherwise, resolve the promise with true.
- Clear the list of pending drain promises.
16.3. UTF-8 encode
To UTF-8 encode a string s: let encoder be a new TextEncoder as defined in [ENCODING]; return the result of calling encoder.encode(s).
Acknowledgements
The design of this API was informed by practical experience with the WHATWG Streams Standard [STREAMS], Node.js streams, and streaming patterns in JavaScript runtimes including Cloudflare Workers, Node.js, Deno, Bun, and Web Browsers.
Index
Terms defined by this specification
- a, in § 6.3
- arrayBuffer(source), in § 11.3
- arrayBuffer(source, options), in § 11.3
- arrayBufferSync(source), in § 11.3
- arrayBufferSync(source, options), in § 11.3
- array(source), in § 11.4
- array(source, options), in § 11.4
- arraySync(source), in § 11.4
- arraySync(source, options), in § 11.4
- b, in § 6.3
- backpressure
- dict-member for BroadcastOptions, in § 6.3
- dict-member for DuplexDirectionOptions, in § 6.3
- dict-member for DuplexOptions, in § 6.3
- dict-member for PushStreamOptions, in § 6.3
- dict-member for ShareOptions, in § 6.3
- dict-member for ShareSyncOptions, in § 6.3
- backpressure policy, in § 4.3
- BackpressurePolicy, in § 6.1
- batched chunks, in § 4.2
- "block", in § 6.1
- Broadcast, in § 6.6
- broadcast, in § 6.3
- broadcast(), in § 13.1.1
- broadcast channel, in § 4.6
- broadcast(options), in § 13.1.1
- BroadcastOptions, in § 6.3
- BroadcastResult, in § 6.3
- bufferSize
- attribute for Broadcast, in § 6.6
- attribute for Share, in § 6.7
- attribute for SyncShare, in § 6.8
- ByteReadableStream, in § 6.2
- bytes(source), in § 11.1
- bytes(source, options), in § 11.1
- bytesSync(source), in § 11.1
- bytesSync(source, options), in § 11.1
- cancel()
- method for Broadcast, in § 13.1.4
- method for Share, in § 13.2.4
- method for SyncShare, in § 6.8
- cancel(reason)
- method for Broadcast, in § 13.1.4
- method for Share, in § 13.2.4
- method for SyncShare, in § 6.8
- close(), in § 6.9
- closing, in § 7.2.5
- compose sync transform pipeline, in § 9.3
- compose transform pipeline, in § 9.3
- ConsumeOptions, in § 6.3
- consumerCount
- attribute for Broadcast, in § 6.6
- attribute for Share, in § 6.7
- attribute for SyncShare, in § 6.8
- ConsumeSyncOptions, in § 6.3
- desiredSize
- attribute for SyncWriter, in § 6.5
- attribute for Writer, in § 7.2.1
- "drop-newest", in § 6.1
- "drop-oldest", in § 6.1
- duplex(), in § 14.1
- DuplexChannel, in § 6.9
- DuplexDirectionOptions, in § 6.3
- duplex(options), in § 14.1
- DuplexOptions, in § 6.3
- encoding
- dict-member for TextConsumeOptions, in § 6.3
- dict-member for TextConsumeSyncOptions, in § 6.3
- end(), in § 7.2.5
- end(options), in § 7.2.5
- endSync()
- method for SyncWriter, in § 6.5
- method for Writer, in § 7.2.5
- fail()
- method for SyncWriter, in § 6.5
- method for Writer, in § 7.2.6
- fail(reason)
- method for SyncWriter, in § 6.5
- method for Writer, in § 7.2.6
- flush signal, in § 9.3
- from(input), in § 8.1
- fromSync(input), in § 8.2
- highWaterMark
- dict-member for BroadcastOptions, in § 6.3
- dict-member for DuplexDirectionOptions, in § 6.3
- dict-member for DuplexOptions, in § 6.3
- dict-member for PushStreamOptions, in § 6.3
- dict-member for ShareOptions, in § 6.3
- dict-member for ShareSyncOptions, in § 6.3
- Iterable Streams API, in § Unnumbered section
- limit
- dict-member for ConsumeOptions, in § 6.3
- dict-member for ConsumeSyncOptions, in § 6.3
- merge(), in § 12.2
- merge(...args), in § 12.2
- MergeOptions, in § 6.3
- normalize transform output, in § 9.3
- notify drain waiters, in § 16.2
- ondrain(drainable), in § 12.3
- parse pipeTo arguments, in § 16.1
- parse variadic transform arguments, in § 16.1
- PipeToOptions, in § 6.3
- pipeTo(source), in § 10.1
- pipeTo(source, ...args), in § 10.1
- PipeToSyncOptions, in § 6.3
- pipeToSync(source), in § 10.2
- pipeToSync(source, ...args), in § 10.2
- preventClose
  - dict-member for PipeToOptions, in § 6.3
  - dict-member for PipeToSyncOptions, in § 6.3
- preventFail
  - dict-member for PipeToOptions, in § 6.3
  - dict-member for PipeToSyncOptions, in § 6.3
- pull()
  - method for Share, in § 13.2.3
  - method for SyncShare, in § 6.8
- pull(...args)
  - method for Share, in § 13.2.3
  - method for SyncShare, in § 6.8
- PullOptions, in § 6.3
- pull pipeline, in § 4.5
- pull(source), in § 9.1
- pull(source, ...args), in § 9.1
- pullSync(source), in § 9.2
- pullSync(source, ...args), in § 9.2
- push()
  - method for Broadcast, in § 13.1.3
  - method for Stream, in § 7.1
- push(...args)
  - method for Broadcast, in § 13.1.3
  - method for Stream, in § 7.1
- push stream, in § 4.4
- PushStreamOptions, in § 6.3
- PushStreamResult, in § 6.3
- readable
  - attribute for DuplexChannel, in § 6.9
  - dict-member for PushStreamResult, in § 6.3
- Share, in § 6.7
- shared source, in § 4.7
- ShareOptions, in § 6.3
- share(source), in § 13.2.1
- share(source, options), in § 13.2.1
- ShareSyncOptions, in § 6.3
- shareSync(source), in § 13.2.5
- shareSync(source, options), in § 13.2.5
- signal
  - dict-member for BroadcastOptions, in § 6.3
  - dict-member for ConsumeOptions, in § 6.3
  - dict-member for DuplexOptions, in § 6.3
  - dict-member for MergeOptions, in § 6.3
  - dict-member for PipeToOptions, in § 6.3
  - dict-member for PullOptions, in § 6.3
  - dict-member for PushStreamOptions, in § 6.3
  - dict-member for ShareOptions, in § 6.3
  - dict-member for TransformCallbackOptions, in § 6.3
  - dict-member for WriteOptions, in § 6.3
- StatelessTransformFn, in § 6.3
- Stream, in § 6.10
- "strict", in § 6.1
- SyncByteReadableStream, in § 6.2
- SyncShare, in § 6.8
- SyncStatelessTransformFn, in § 6.3
- SyncWriter, in § 6.5
- tap(callback), in § 12.1
- tapSync(callback), in § 12.1
- TextConsumeOptions, in § 6.3
- TextConsumeSyncOptions, in § 6.3
- text(source), in § 11.2
- text(source, options), in § 11.2
- textSync(source), in § 11.2
- textSync(source, options), in § 11.2
- transform argument, in § 9.3
- TransformCallbackOptions, in § 6.3
- try-fallback pattern, in § 5.5
- UTF-8 encode, in § 16.3
- web-interoperable runtime, in § 4.1
- write(chunk), in § 7.2.2
- write(chunk, options), in § 7.2.2
- WriteOptions, in § 6.3
- Writer, in § 6.4
- writer
  - attribute for DuplexChannel, in § 6.9
  - dict-member for BroadcastResult, in § 6.3
  - dict-member for PushStreamResult, in § 6.3
- writeSync(chunk)
  - method for SyncWriter, in § 6.5
  - method for Writer, in § 7.2.4
- writev(chunks), in § 7.2.3
- writev(chunks, options), in § 7.2.3
- writevSync(chunks)
  - method for SyncWriter, in § 6.5
  - method for Writer, in § 7.2.4
Terms defined by reference
- [DOM] defines the following terms:
  - AbortController
  - AbortSignal
  - aborted
- [ENCODING] defines the following terms:
  - TextDecoder
  - TextEncoder
- [WEBIDL] defines the following terms:
  - ArrayBuffer
  - ArrayBufferView
  - DOMString
  - EnforceRange
  - Promise
  - RangeError
  - TypeError
  - USVString
  - Uint8Array
  - a promise rejected with
  - a promise resolved with
  - any
  - boolean
  - long
  - long long
  - object
  - sequence
  - undefined
  - unsigned long
  - unsigned long long
IDL Index
enum BackpressurePolicy { "strict", "block", "drop-oldest", "drop-newest" };

typedef object ByteReadableStream;
typedef object SyncByteReadableStream;

dictionary WriteOptions {
  AbortSignal signal;
};

dictionary PushStreamOptions {
  unsigned long highWaterMark = 4;
  BackpressurePolicy backpressure = "strict";
  AbortSignal signal;
};

dictionary PullOptions {
  AbortSignal signal;
};

dictionary PipeToOptions {
  AbortSignal signal;
  boolean preventClose = false;
  boolean preventFail = false;
};

dictionary PipeToSyncOptions {
  boolean preventClose = false;
  boolean preventFail = false;
};

dictionary ConsumeOptions {
  AbortSignal signal;
  [EnforceRange] unsigned long long limit;
};

dictionary ConsumeSyncOptions {
  [EnforceRange] unsigned long long limit;
};

dictionary TextConsumeOptions : ConsumeOptions {
  DOMString encoding = "utf-8";
};

dictionary TextConsumeSyncOptions : ConsumeSyncOptions {
  DOMString encoding = "utf-8";
};

dictionary MergeOptions {
  AbortSignal signal;
};

dictionary BroadcastOptions {
  unsigned long highWaterMark = 16;
  BackpressurePolicy backpressure = "strict";
  AbortSignal signal;
};

dictionary ShareOptions {
  unsigned long highWaterMark = 16;
  BackpressurePolicy backpressure = "strict";
  AbortSignal signal;
};

dictionary ShareSyncOptions {
  unsigned long highWaterMark = 16;
  BackpressurePolicy backpressure = "strict";
};

dictionary DuplexDirectionOptions {
  unsigned long highWaterMark;
  BackpressurePolicy backpressure;
};

dictionary DuplexOptions {
  unsigned long highWaterMark = 4;
  BackpressurePolicy backpressure = "strict";
  DuplexDirectionOptions a;
  DuplexDirectionOptions b;
  AbortSignal signal;
};

dictionary TransformCallbackOptions {
  required AbortSignal signal;
};

callback StatelessTransformFn = any (sequence<Uint8Array>? chunks, TransformCallbackOptions options);
callback SyncStatelessTransformFn = any (sequence<Uint8Array>? chunks);

dictionary PushStreamResult {
  required Writer writer;
  required ByteReadableStream readable;
};

dictionary BroadcastResult {
  required Writer writer;
  required Broadcast broadcast;
};

interface Writer {
  readonly attribute long? desiredSize;
  Promise<undefined> write((Uint8Array or USVString) chunk, optional WriteOptions options = {});
  Promise<undefined> writev(sequence<(Uint8Array or USVString)> chunks, optional WriteOptions options = {});
  boolean writeSync((Uint8Array or USVString) chunk);
  boolean writevSync(sequence<(Uint8Array or USVString)> chunks);
  Promise<unsigned long long> end(optional WriteOptions options = {});
  unsigned long long endSync();
  undefined fail(optional any reason);
};

interface SyncWriter {
  readonly attribute long? desiredSize;
  boolean writeSync((Uint8Array or USVString) chunk);
  boolean writevSync(sequence<(Uint8Array or USVString)> chunks);
  unsigned long long endSync();
  undefined fail(optional any reason);
};

interface Broadcast {
  readonly attribute unsigned long consumerCount;
  readonly attribute unsigned long bufferSize;
  ByteReadableStream push(any... args);
  undefined cancel(optional any reason);
};

interface Share {
  readonly attribute unsigned long consumerCount;
  readonly attribute unsigned long bufferSize;
  ByteReadableStream pull(any... args);
  undefined cancel(optional any reason);
};

interface SyncShare {
  readonly attribute unsigned long consumerCount;
  readonly attribute unsigned long bufferSize;
  SyncByteReadableStream pull(any... args);
  undefined cancel(optional any reason);
};

interface DuplexChannel {
  readonly attribute Writer writer;
  readonly attribute ByteReadableStream readable;
  Promise<undefined> close();
};

[Exposed=*]
namespace Stream {
  /* Push stream creation */
  PushStreamResult push(any... args);

  /* Stream factories */
  ByteReadableStream from(any input);
  SyncByteReadableStream fromSync(any input);

  /* Pull pipelines */
  ByteReadableStream pull(any source, any... args);
  SyncByteReadableStream pullSync(any source, any... args);

  /* Pipe operations */
  Promise<unsigned long long> pipeTo(any source, any... args);
  unsigned long long pipeToSync(any source, any... args);

  /* Consumers */
  Promise<Uint8Array> bytes(any source, optional ConsumeOptions options = {});
  Uint8Array bytesSync(any source, optional ConsumeSyncOptions options = {});
  Promise<USVString> text(any source, optional TextConsumeOptions options = {});
  USVString textSync(any source, optional TextConsumeSyncOptions options = {});
  Promise<ArrayBuffer> arrayBuffer(any source, optional ConsumeOptions options = {});
  ArrayBuffer arrayBufferSync(any source, optional ConsumeSyncOptions options = {});
  Promise<sequence<Uint8Array>> array(any source, optional ConsumeOptions options = {});
  sequence<Uint8Array> arraySync(any source, optional ConsumeSyncOptions options = {});

  /* Utilities */
  StatelessTransformFn tap(any callback);
  SyncStatelessTransformFn tapSync(any callback);
  ByteReadableStream merge(any... args);
  Promise<boolean>? ondrain(any drainable);

  /* Multi-consumer */
  BroadcastResult broadcast(optional BroadcastOptions options = {});
  Share share(any source, optional ShareOptions options = {});
  SyncShare shareSync(any source, optional ShareSyncOptions options = {});

  /* Duplex */
  sequence<DuplexChannel> duplex(optional DuplexOptions options = {});
};
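The shapes collected in the IDL Index above can be illustrated with plain generators. This is a hypothetical sketch only: no runtime ships the Stream namespace yet, and the names source, upperCase, and collectText are invented for the example, not part of the API. It models the design principles from the introduction: a source is an AsyncIterable<Uint8Array[]> of chunk batches, a transform is a lazy pull-through async generator, and a consumer in the spirit of Stream.text() drains and decodes.

```javascript
const encoder = new TextEncoder();

// A source is just an async iterable of chunk batches (Uint8Array[]).
async function* source() {
  yield [encoder.encode("hello, "), encoder.encode("iterable ")];
  yield [encoder.encode("streams")];
}

// A pull-through transform: no data flows until the consumer iterates.
// (Per-chunk decoding is safe here because the demo data is pure ASCII.)
async function* upperCase(input) {
  const decoder = new TextDecoder();
  for await (const batch of input) {
    yield batch.map((chunk) => encoder.encode(decoder.decode(chunk).toUpperCase()));
  }
}

// A consumer in the spirit of Stream.text(source): drain and decode,
// using streaming decode so multi-byte sequences may span chunks.
async function collectText(input) {
  const decoder = new TextDecoder();
  let text = "";
  for await (const batch of input) {
    for (const chunk of batch) text += decoder.decode(chunk, { stream: true });
  }
  return text + decoder.decode();
}

collectText(upperCase(source())).then((text) => {
  console.log(text); // "HELLO, ITERABLE STREAMS"
});
```

Note that the pipeline is fully lazy: composing upperCase(source()) runs no generator body until collectText begins iterating, which is the "no data flows until the consumer pulls" property.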
Copyright & Software License
Ecma International
Rue du Rhône 114
CH-1204 Geneva
Tel: +41 22 849 6000
Fax: +41 22 849 6001
Web: https://ecma-international.org/
Copyright Notice
© 2026 Ecma International
This draft document may be copied and furnished to others, and derivative works that comment on or otherwise explain it or assist in its implementation may be prepared, copied, published, and distributed, in whole or in part, without restriction of any kind, provided that the above copyright notice and this section are included on all such copies and derivative works. However, this document itself may not be modified in any way, including by removing the copyright notice or references to Ecma International, except as needed for the purpose of developing any document or deliverable produced by Ecma International.
This disclaimer is valid only prior to final version of this document. After approval all rights on the standard are reserved by Ecma International.
The limited permissions are granted through the standardization phase and will not be revoked by Ecma International or its successors or assigns during this time.
This document and the information contained herein is provided on an "AS IS" basis and ECMA INTERNATIONAL DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY OWNERSHIP RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Software License
All Software contained in this document ("Software") is protected by copyright and is being made available under the "BSD License", included below. This Software may be subject to third party rights (rights from parties other than Ecma International), including patent rights, and no licenses under such third party rights are granted under this license even if the third party concerned is a member of Ecma International. SEE THE ECMA CODE OF CONDUCT IN PATENT MATTERS AVAILABLE AT https://ecma-international.org/memento/codeofconduct.htm FOR INFORMATION REGARDING THE LICENSING OF PATENT CLAIMS THAT ARE REQUIRED TO IMPLEMENT ECMA INTERNATIONAL STANDARDS.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
- Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
- Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
- Neither the name of the authors nor Ecma International may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE ECMA INTERNATIONAL "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL ECMA INTERNATIONAL BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.