Node.js module
zlib
The 'node:zlib' module provides compression and decompression APIs, including Gzip, Deflate, Brotli, Zstd, and raw deflate/inflate streams.
It offers both streaming and callback-based methods, with configurable compression levels and flush modes, making it suitable for data compression, HTTP content encoding, and file archiving.
Works in Bun
Fully implemented. 98% of Node.js's test suite passes.
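As a quick orientation before the reference entries below, this minimal sketch round-trips a buffer through the one-shot synchronous APIs; it uses only documented node:zlib calls.

```ts
import { gzipSync, gunzipSync } from "node:zlib";

const input = Buffer.from("hello ".repeat(100)); // compressible input
const compressed = gzipSync(input);              // one-shot gzip compression
const restored = gunzipSync(compressed);         // one-shot decompression

console.log(input.length, compressed.length);    // compressed is much smaller
console.log(restored.equals(input));             // true: lossless round trip
```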
namespace constants
- brotliCompressSync(buf: InputType, options?: BrotliOptions): NonSharedBuffer;
  Compress a chunk of data with BrotliCompress.
- brotliDecompressSync(buf: InputType, options?: BrotliOptions): NonSharedBuffer;
  Decompress a chunk of data with BrotliDecompress.
- crc32(data: string | ArrayBufferView<ArrayBufferLike>, value?: number): number;
  Computes a 32-bit Cyclic Redundancy Check checksum of data. If value is specified, it is used as the starting value of the checksum; otherwise, 0 is used as the starting value.
  @param data When data is a string, it will be encoded as UTF-8 before being used for computation.
  @param value An optional starting value. It must be a 32-bit unsigned integer.
  @returns A 32-bit unsigned integer containing the checksum.
- createBrotliCompress(options?: BrotliOptions): BrotliCompress;
  Creates and returns a new BrotliCompress object.
- createBrotliDecompress(options?: BrotliOptions): BrotliDecompress;
  Creates and returns a new BrotliDecompress object.
- createDeflateRaw(options?: ZlibOptions): DeflateRaw;
  Creates and returns a new DeflateRaw object.
  An upgrade of zlib from 1.2.8 to 1.2.11 changed behavior when windowBits is set to 8 for raw deflate streams. zlib would automatically set windowBits to 9 if it was initially set to 8. Newer versions of zlib will throw an exception, so Node.js restored the original behavior of upgrading a value of 8 to 9, since passing windowBits = 9 to zlib actually results in a compressed stream that effectively uses an 8-bit window only.
- createGzip(options?: ZlibOptions): Gzip;
  Creates and returns a new Gzip object. See example.
- createInflateRaw(options?: ZlibOptions): InflateRaw;
  Creates and returns a new InflateRaw object.
- createZstdCompress(options?: ZstdOptions): ZstdCompress;
  Creates and returns a new ZstdCompress object.
- createZstdDecompress(options?: ZstdOptions): ZstdDecompress;
  Creates and returns a new ZstdDecompress object.
- deflateRawSync(buf: InputType, options?: ZlibOptions): NonSharedBuffer;
  Compress a chunk of data with DeflateRaw.
- deflateSync(buf: InputType, options?: ZlibOptions): NonSharedBuffer;
  Compress a chunk of data with Deflate.
- gunzipSync(buf: InputType, options?: ZlibOptions): NonSharedBuffer;
  Decompress a chunk of data with Gunzip.
- gzipSync(buf: InputType, options?: ZlibOptions): NonSharedBuffer;
  Compress a chunk of data with Gzip.
- inflateRawSync(buf: InputType, options?: ZlibOptions): NonSharedBuffer;
  Decompress a chunk of data with InflateRaw.
- inflateSync(buf: InputType, options?: ZlibOptions): NonSharedBuffer;
  Decompress a chunk of data with Inflate.
- unzipSync(buf: InputType, options?: ZlibOptions): NonSharedBuffer;
  Decompress a chunk of data with Unzip.
- zstdCompressSync(buf: InputType, options?: ZstdOptions): NonSharedBuffer;
  Compress a chunk of data with ZstdCompress.
- zstdDecompressSync(buf: InputType, options?: ZstdOptions): NonSharedBuffer;
  Decompress a chunk of data with ZstdDecompress.
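To illustrate the value parameter of crc32 described above, a checksum can be computed incrementally by feeding the previous result back in. This sketch assumes nothing beyond the crc32 signature shown:

```ts
import { crc32 } from "node:zlib";

const whole = crc32("hello world");

// Computing the checksum in two steps, chaining via `value`,
// yields the same result as one pass over the full input.
const part = crc32("hello ");          // starting value defaults to 0
const chained = crc32("world", part);  // continue from the previous checksum

console.log(whole === chained); // true
```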
Type definitions
namespace brotliCompress
namespace brotliDecompress
namespace deflate
namespace deflateRaw
namespace gunzip
namespace gzip
namespace inflate
namespace inflateRaw
namespace unzip
namespace zstdCompress
namespace zstdDecompress
interface BrotliCompress
Transform streams are Duplex streams where the output is in some way related to the input. Like all Duplex streams, Transform streams implement both the Readable and Writable interfaces.
Examples of Transform streams include:
- zlib streams
- crypto streams
- allowHalfOpen: boolean
  If false then the stream will automatically end the writable side when the readable side ends. Set initially by the allowHalfOpen constructor option, which defaults to true.
  This can be changed manually to change the half-open behavior of an existing Duplex stream instance, but must be changed before the 'end' event is emitted.
- readable: boolean
  Is true if it is safe to call read, which means the stream has not been destroyed or emitted 'error' or 'end'.
- readonly readableAborted: boolean
  Returns whether the stream was destroyed or errored before emitting 'end'.
- readonly readableEncoding: null | BufferEncoding
  Getter for the property encoding of a given Readable stream. The encoding property can be set using the setEncoding method.
- readonly readableFlowing: null | boolean
  This property reflects the current state of a Readable stream as described in the Three states section.
- readonly readableHighWaterMark: number
  Returns the value of highWaterMark passed when creating this Readable.
- readonly readableLength: number
  This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the highWaterMark.
- readonly writable: boolean
  Is true if it is safe to call writable.write(), which means the stream has not been destroyed, errored, or ended.
- readonly writableAborted: boolean
  Returns whether the stream was destroyed or errored before emitting 'finish'.
- readonly writableCorked: number
  Number of times writable.uncork() needs to be called in order to fully uncork the stream.
- readonly writableEnded: boolean
  Is true after writable.end() has been called. This property does not indicate whether the data has been flushed; for that, use writable.writableFinished instead.
- readonly writableHighWaterMark: number
  Returns the value of highWaterMark passed when creating this Writable.
- readonly writableLength: number
  This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the highWaterMark.
- readonly writableNeedDrain: boolean
  Is true if the stream's buffer has been full and the stream will emit 'drain'.
- [Symbol.asyncDispose](): Promise<void>;
  Calls readable.destroy() with an AbortError and returns a promise that fulfills when the stream is finished.
- [Symbol.asyncIterator](): AsyncIterator<any>;
  @returns AsyncIterator to fully consume the stream.
- addListener(event: 'close', listener: () => void): this;
  addListener(event: 'data', listener: (chunk: any) => void): this;
  addListener(event: 'drain', listener: () => void): this;
  addListener(event: 'end', listener: () => void): this;
  addListener(event: 'error', listener: (err: Error) => void): this;
  addListener(event: 'finish', listener: () => void): this;
  addListener(event: 'pause', listener: () => void): this;
  addListener(event: 'pipe', listener: (src: Readable) => void): this;
  addListener(event: 'readable', listener: () => void): this;
  addListener(event: 'resume', listener: () => void): this;
  addListener(event: 'unpipe', listener: (src: Readable) => void): this;
  addListener(event: string | symbol, listener: (...args: any[]) => void): this;
  Event emitter. The defined events are: close, data, drain, end, error, finish, pause, pipe, readable, resume, and unpipe.
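Since BrotliCompress is a Transform stream, the usual Duplex event flow applies: write on one side, read compressed output on the other. A minimal sketch (the sample strings are arbitrary):

```ts
import { createBrotliCompress } from "node:zlib";

const brotli = createBrotliCompress();
const chunks: Buffer[] = [];

brotli.on("data", (chunk: Buffer) => chunks.push(chunk)); // readable side
brotli.on("end", () => {
  console.log(`compressed to ${Buffer.concat(chunks).length} bytes`);
});
brotli.on("error", (err) => console.error(err));

brotli.write("some text to compress, ");  // writable side
brotli.end("and a final chunk");          // flush and finish
```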
- asIndexedPairs(options?): Readable;
  This method returns a new stream with chunks of the underlying stream paired with a counter in the form [index, chunk]. The first index value is 0 and it increases by 1 for each chunk produced.
  @returns a stream of indexed pairs.
- compose<T extends ReadableStream>(stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>, options?: { signal: AbortSignal }): T;
- cork(): void;
  The writable.cork() method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.
  The primary intent of writable.cork() is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them all to writable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork() without implementing writable._writev() may have an adverse effect on throughput.
  See also: writable.uncork(), writable._writev().
- destroy(error?: Error): this;
  Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the readable stream will release any internal resources and subsequent calls to push() will be ignored.
  Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.
  Implementors should not override this method, but instead implement readable._destroy().
  @param error Error which will be passed as payload in 'error' event
- drop(limit: number, options?): Readable;
  This method returns a new stream with the first limit chunks dropped from the start.
  @param limit the number of chunks to drop from the readable.
  @returns a stream with limit chunks dropped from the start.
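A small sketch of drop (and its counterpart take, documented below), assuming a runtime where these experimental stream helper methods are available:

```ts
import { Readable } from "node:stream";

// take(n) keeps the first n chunks; drop(n) skips them.
const first = await Readable.from([1, 2, 3, 4, 5]).take(2).toArray();
console.log(first); // [1, 2]

const rest = await Readable.from([1, 2, 3, 4, 5]).drop(2).toArray();
console.log(rest); // [3, 4, 5]
```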
- emit(event: 'close'): boolean;
  Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.
  Returns true if the event had listeners, false otherwise.
  ```js
  import { EventEmitter } from 'node:events';
  const myEmitter = new EventEmitter();

  // First listener
  myEmitter.on('event', function firstListener() {
    console.log('Helloooo! first listener');
  });
  // Second listener
  myEmitter.on('event', function secondListener(arg1, arg2) {
    console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
  });
  // Third listener
  myEmitter.on('event', function thirdListener(...args) {
    const parameters = args.join(', ');
    console.log(`event with parameters ${parameters} in third listener`);
  });

  console.log(myEmitter.listeners('event'));

  myEmitter.emit('event', 1, 2, 3, 4, 5);

  // Prints:
  // [
  //   [Function: firstListener],
  //   [Function: secondListener],
  //   [Function: thirdListener]
  // ]
  // Helloooo! first listener
  // event with parameters 1, 2 in second listener
  // event with parameters 1, 2, 3, 4, 5 in third listener
  ```
- end(cb?: () => void): this;
  end(chunk: any, cb?: () => void): this;
  end(chunk: any, encoding: BufferEncoding, cb?: () => void): this;
  Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.
  Calling the write method after calling end will raise an error.
  ```js
  // Write 'hello, ' and then end with 'world!'.
  import fs from 'node:fs';
  const file = fs.createWriteStream('example.txt');
  file.write('hello, ');
  file.end('world!');
  // Writing more now is not allowed!
  ```
  @param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.
  @param encoding The encoding if chunk is a string
- eventNames(): (string | symbol)[];
  Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or Symbols.
  ```js
  import { EventEmitter } from 'node:events';

  const myEE = new EventEmitter();
  myEE.on('foo', () => {});
  myEE.on('bar', () => {});

  const sym = Symbol('symbol');
  myEE.on(sym, () => {});

  console.log(myEE.eventNames());
  // Prints: [ 'foo', 'bar', Symbol(symbol) ]
  ```
- every(fn, options?): Promise<boolean>;
  This method is similar to Array.prototype.every and calls fn on each chunk in the stream to check if all awaited return values are truthy for fn. Once an fn call on a chunk's awaited return value is falsy, the stream is destroyed and the promise is fulfilled with false. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true.
  @param fn a function to call on each chunk of the stream. Async or not.
  @returns a promise evaluating to true if fn returned a truthy value for every one of the chunks.
- filter(fn, options?): Readable;
  This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be awaited.
  @param fn a function to filter chunks from the stream. Async or not.
  @returns a stream filtered with the predicate fn.
- find<T>(fn, options?): Promise<undefined | T>;
  find(fn, options?): Promise<any>;
  This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.
  @param fn a function to call on each chunk of the stream. Async or not.
  @returns a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.
- flatMap(fn, options?): Readable;
  This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.
  It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.
  @param fn a function to map over every chunk in the stream. May be async. May be a stream or generator.
  @returns a stream flat-mapped with the function fn.
- forEach(fn, options?): Promise<void>;
  This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be awaited.
  This method is different from for await...of loops in that it can optionally process chunks concurrently. In addition, a forEach iteration can only be stopped by having passed a signal option and aborting the related AbortController while for await...of can be stopped with break or return. In either case the stream will be destroyed.
  This method is different from listening to the 'data' event in that it uses the readable event in the underlying machinery and can limit the number of concurrent fn calls.
  @param fn a function to call on each chunk of the stream. Async or not.
  @returns a promise for when the stream has finished.
- getMaxListeners(): number;
  Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to EventEmitter.defaultMaxListeners.
- isPaused(): boolean;
  The readable.isPaused() method returns the current operating state of the Readable. This is used primarily by the mechanism that underlies the readable.pipe() method. In most typical cases, there will be no reason to use this method directly.
  ```js
  const readable = new stream.Readable();

  readable.isPaused(); // === false
  readable.pause();
  readable.isPaused(); // === true
  readable.resume();
  readable.isPaused(); // === false
  ```
- iterator(options?: { destroyOnReturn: boolean }): AsyncIterator<any>;
  The iterator created by this method gives users the option to cancel the destruction of the stream if the for await...of loop is exited by return, break, or throw, or if the iterator should destroy the stream if the stream emitted an error during iteration.
- listenerCount(eventName: string | symbol, listener?: Function): number;
  Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.
  @param eventName The name of the event being listened for
  @param listener The event handler function
- listeners(eventName: string | symbol): Function[];
  Returns a copy of the array of listeners for the event named eventName.
  ```js
  server.on('connection', (stream) => {
    console.log('someone connected!');
  });
  console.log(util.inspect(server.listeners('connection')));
  // Prints: [ [Function] ]
  ```
- map(fn, options?): Readable;
  This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be awaited before being passed to the result stream.
  @param fn a function to map over every chunk in the stream. Async or not.
  @returns a stream mapped with the function fn.
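These helpers compose like their array counterparts, except each step may be async and chunks flow through lazily. A sketch, again assuming the experimental helper methods are available:

```ts
import { Readable } from "node:stream";

const result = await Readable.from([1, 2, 3, 4])
  .map(async (n: number) => n * 2)  // async mapper is awaited per chunk
  .filter((n: number) => n > 4)     // keep chunks passing the predicate
  .toArray();                       // drain into an array (see toArray below)

console.log(result); // [6, 8]
```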
- off(eventName: string | symbol, listener: (...args: any[]) => void): this;
  Alias for emitter.removeListener().
- on(event: 'close', listener: () => void): this;
  Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.
  ```js
  server.on('connection', (stream) => {
    console.log('someone connected!');
  });
  ```
  Returns a reference to the EventEmitter, so that calls can be chained.
  By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.
  ```js
  import { EventEmitter } from 'node:events';
  const myEE = new EventEmitter();
  myEE.on('foo', () => console.log('a'));
  myEE.prependListener('foo', () => console.log('b'));
  myEE.emit('foo');
  // Prints:
  //   b
  //   a
  ```
  @param listener The callback function
- once(event: 'close', listener: () => void): this;
  Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.
  ```js
  server.once('connection', (stream) => {
    console.log('Ah, we have our first user!');
  });
  ```
  Returns a reference to the EventEmitter, so that calls can be chained.
  By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.
  ```js
  import { EventEmitter } from 'node:events';
  const myEE = new EventEmitter();
  myEE.once('foo', () => console.log('a'));
  myEE.prependOnceListener('foo', () => console.log('b'));
  myEE.emit('foo');
  // Prints:
  //   b
  //   a
  ```
  @param listener The callback function
- pause(): this;
  The readable.pause() method will cause a stream in flowing mode to stop emitting 'data' events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.
  ```js
  const readable = getReadableStreamSomehow();
  readable.on('data', (chunk) => {
    console.log(`Received ${chunk.length} bytes of data.`);
    readable.pause();
    console.log('There will be no additional data for 1 second.');
    setTimeout(() => {
      console.log('Now data will start flowing again.');
      readable.resume();
    }, 1000);
  });
  ```
  The readable.pause() method has no effect if there is a 'readable' event listener.
- prependListener(event: 'close', listener: () => void): this;
  Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.
  ```js
  server.prependListener('connection', (stream) => {
    console.log('someone connected!');
  });
  ```
  Returns a reference to the EventEmitter, so that calls can be chained.
  @param listener The callback function
- prependOnceListener(event: 'close', listener: () => void): this;
  Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.
  ```js
  server.prependOnceListener('connection', (stream) => {
    console.log('Ah, we have our first user!');
  });
  ```
  Returns a reference to the EventEmitter, so that calls can be chained.
  @param listener The callback function
- rawListeners(eventName: string | symbol): Function[];
  Returns a copy of the array of listeners for the event named eventName, including any wrappers (such as those created by .once()).
  ```js
  import { EventEmitter } from 'node:events';
  const emitter = new EventEmitter();
  emitter.once('log', () => console.log('log once'));

  // Returns a new Array with a function `onceWrapper` which has a property
  // `listener` which contains the original listener bound above
  const listeners = emitter.rawListeners('log');
  const logFnWrapper = listeners[0];

  // Logs "log once" to the console and does not unbind the `once` event
  logFnWrapper.listener();

  // Logs "log once" to the console and removes the listener
  logFnWrapper();

  emitter.on('log', () => console.log('log persistently'));
  // Will return a new Array with a single function bound by `.on()` above
  const newListeners = emitter.rawListeners('log');

  // Logs "log persistently" twice
  newListeners[0]();
  emitter.emit('log');
  ```
- read(size?: number): any;
  The readable.read() method reads data out of the internal buffer and returns it. If no data is available to be read, null is returned. By default, the data is returned as a Buffer object unless an encoding has been specified using the readable.setEncoding() method or the stream is operating in object mode.
  The optional size argument specifies a specific number of bytes to read. If size bytes are not available to be read, null will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.
  If the size argument is not specified, all of the data contained in the internal buffer will be returned.
  The size argument must be less than or equal to 1 GiB.
  The readable.read() method should only be called on Readable streams operating in paused mode. In flowing mode, readable.read() is called automatically until the internal buffer is fully drained.
  ```js
  const readable = getReadableStreamSomehow();

  // 'readable' may be triggered multiple times as data is buffered in
  readable.on('readable', () => {
    let chunk;
    console.log('Stream is readable (new data received in buffer)');
    // Use a loop to make sure we read all currently available data
    while (null !== (chunk = readable.read())) {
      console.log(`Read ${chunk.length} bytes of data...`);
    }
  });

  // 'end' will be triggered once when there is no more data available
  readable.on('end', () => {
    console.log('Reached end of stream.');
  });
  ```
  Each call to readable.read() returns a chunk of data, or null. The chunks are not concatenated. A while loop is necessary to consume all data currently in the buffer. When reading a large file .read() may return null, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new 'readable' event will be emitted when there is more data in the buffer. Finally the 'end' event will be emitted when there is no more data to come.
  Therefore to read a file's whole contents from a readable, it is necessary to collect chunks across multiple 'readable' events:
  ```js
  const chunks = [];

  readable.on('readable', () => {
    let chunk;
    while (null !== (chunk = readable.read())) {
      chunks.push(chunk);
    }
  });

  readable.on('end', () => {
    const content = chunks.join('');
  });
  ```
  A Readable stream in object mode will always return a single item from a call to readable.read(size), regardless of the value of the size argument.
  If the readable.read() method returns a chunk of data, a 'data' event will also be emitted.
  Calling read after the 'end' event has been emitted will return null. No runtime error will be raised.
  @param size Optional argument to specify how much data to read.
- reduce<T>(fn, initial?: undefined, options?): Promise<T>;
  reduce<T>(fn, initial: T, options?): Promise<T>;
  This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
  If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.
  The reducer function iterates the stream element-by-element which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the readable.map method.
  @param fn a reducer function to call over every chunk in the stream. Async or not.
  @param initial the initial value to use in the reduction.
  @returns a promise for the final value of the reduction.
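A short sketch of reduce with an explicit initial value (the same experimental helper family as above):

```ts
import { Readable } from "node:stream";

// Sum chunk lengths without buffering the whole stream.
const total = await Readable.from(["ab", "cde", "f"])
  .reduce((sum: number, chunk: string) => sum + chunk.length, 0);

console.log(total); // 6
```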
- removeAllListeners(eventName?: string | symbol): this;
  Removes all listeners, or those of the specified eventName.
  It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).
  Returns a reference to the EventEmitter, so that calls can be chained.
- removeListener(event: 'close', listener: () => void): this;
  Removes the specified listener from the listener array for the event named eventName.
  ```js
  const callback = (stream) => {
    console.log('someone connected!');
  };
  server.on('connection', callback);
  // ...
  server.removeListener('connection', callback);
  ```
  removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance.
  Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from emit() in progress. Subsequent events behave as expected.
  ```js
  import { EventEmitter } from 'node:events';
  class MyEmitter extends EventEmitter {}
  const myEmitter = new MyEmitter();

  const callbackA = () => {
    console.log('A');
    myEmitter.removeListener('event', callbackB);
  };

  const callbackB = () => {
    console.log('B');
  };

  myEmitter.on('event', callbackA);
  myEmitter.on('event', callbackB);

  // callbackA removes listener callbackB but it will still be called.
  // Internal listener array at time of emit [callbackA, callbackB]
  myEmitter.emit('event');
  // Prints:
  //   A
  //   B

  // callbackB is now removed.
  // Internal listener array [callbackA]
  myEmitter.emit('event');
  // Prints:
  //   A
  ```
  Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.
  When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed:
  ```js
  import { EventEmitter } from 'node:events';
  const ee = new EventEmitter();

  function pong() {
    console.log('pong');
  }

  ee.on('ping', pong);
  ee.once('ping', pong);
  ee.removeListener('ping', pong);

  ee.emit('ping');
  ee.emit('ping');
  ```
  Returns a reference to the EventEmitter, so that calls can be chained.
- resume(): this;
  The readable.resume() method causes an explicitly paused Readable stream to resume emitting 'data' events, switching the stream into flowing mode.
  The readable.resume() method can be used to fully consume the data from a stream without actually processing any of that data:
  ```js
  getReadableStreamSomehow()
    .resume()
    .on('end', () => {
      console.log('Reached the end, but did not read anything.');
    });
  ```
  The readable.resume() method has no effect if there is a 'readable' event listener.
- setDefaultEncoding(encoding: BufferEncoding): this;
  The writable.setDefaultEncoding() method sets the default encoding for a Writable stream.
  @param encoding The new default encoding
- setEncoding(encoding: BufferEncoding): this;
  The readable.setEncoding() method sets the character encoding for data read from the Readable stream.
  By default, no encoding is assigned and stream data will be returned as Buffer objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than as Buffer objects. For instance, calling readable.setEncoding('utf8') will cause the output data to be interpreted as UTF-8 data, and passed as strings. Calling readable.setEncoding('hex') will cause the data to be encoded in hexadecimal string format.
  The Readable stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream as Buffer objects.
  ```js
  const readable = getReadableStreamSomehow();
  readable.setEncoding('utf8');
  readable.on('data', (chunk) => {
    assert.equal(typeof chunk, 'string');
    console.log('Got %d characters of string data:', chunk.length);
  });
  ```
  @param encoding The encoding to use.
- setMaxListeners(n: number): this;
  By default EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps finding memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.
  Returns a reference to the EventEmitter, so that calls can be chained.
- some(fn, options?): Promise<boolean>;
  This method is similar to Array.prototype.some and calls fn on each chunk in the stream until the awaited return value is true (or any truthy value). Once an fn call on a chunk's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false.
  @param fn a function to call on each chunk of the stream. Async or not.
  @returns a promise evaluating to true if fn returned a truthy value for at least one of the chunks.
- take(limit: number, options?): Readable;
  This method returns a new stream with the first limit chunks.
  @param limit the number of chunks to take from the readable.
  @returns a stream with limit chunks taken.
- toArray(options?): Promise<any[]>;
  This method allows easily obtaining the contents of a stream.
  As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.
  @returns a promise containing an array with the contents of the stream.
- uncork(): void;
  The writable.uncork() method flushes all data buffered since cork was called.
  When using writable.cork() and writable.uncork() to manage the buffering of writes to a stream, defer calls to writable.uncork() using process.nextTick(). Doing so allows batching of all writable.write() calls that occur within a given Node.js event loop phase.
  ```js
  stream.cork();
  stream.write('some ');
  stream.write('data ');
  process.nextTick(() => stream.uncork());
  ```
  If the writable.cork() method is called multiple times on a stream, the same number of calls to writable.uncork() must be called to flush the buffered data.
  ```js
  stream.cork();
  stream.write('some ');
  stream.cork();
  stream.write('data ');
  process.nextTick(() => {
    stream.uncork();
    // The data will not be flushed until uncork() is called a second time.
    stream.uncork();
  });
  ```
  See also: writable.cork().
- unpipe(destination?: WritableStream): this;
  The readable.unpipe() method detaches a Writable stream previously attached using the pipe method.
  If the destination is not specified, then all pipes are detached.
  If the destination is specified, but no pipe is set up for it, then the method does nothing.
  ```js
  import fs from 'node:fs';
  const readable = getReadableStreamSomehow();
  const writable = fs.createWriteStream('file.txt');
  // All the data from readable goes into 'file.txt',
  // but only for the first second.
  readable.pipe(writable);
  setTimeout(() => {
    console.log('Stop writing to file.txt.');
    readable.unpipe(writable);
    console.log('Manually close the file stream.');
    writable.end();
  }, 1000);
  ```
  @param destination Optional specific stream to unpipe
- unshift(chunk: any, encoding?: BufferEncoding): void;
  Passing chunk as null signals the end of the stream (EOF) and behaves the same as readable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.
  The readable.unshift() method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.
  The stream.unshift(chunk) method cannot be called after the 'end' event has been emitted or a runtime error will be thrown.
  Developers using stream.unshift() often should consider switching to use of a Transform stream instead. See the API for stream implementers section for more information.
  ```js
  // Pull off a header delimited by \n\n.
  // Use unshift() if we get too much.
  // Call the callback with (error, header, stream).
  import { StringDecoder } from 'node:string_decoder';
  function parseHeader(stream, callback) {
    stream.on('error', callback);
    stream.on('readable', onReadable);
    const decoder = new StringDecoder('utf8');
    let header = '';
    function onReadable() {
      let chunk;
      while (null !== (chunk = stream.read())) {
        const str = decoder.write(chunk);
        if (str.includes('\n\n')) {
          // Found the header boundary.
          const split = str.split(/\n\n/);
          header += split.shift();
          const remaining = split.join('\n\n');
          const buf = Buffer.from(remaining, 'utf8');
          stream.removeListener('error', callback);
          // Remove the 'readable' listener before unshifting.
          stream.removeListener('readable', onReadable);
          if (buf.length) stream.unshift(buf);
          // Now the body of the message can be read from the stream.
          callback(null, header, stream);
          return;
        }
        // Still reading the header.
        header += str;
      }
    }
  }
  ```
  Unlike push, stream.unshift(chunk) will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results if readable.unshift() is called during a read (i.e. from within a _read implementation on a custom stream). Following the call to readable.unshift() with an immediate push will reset the reading state appropriately, however it is best to simply avoid calling readable.unshift() while in the process of performing a read.
  @param chunk Chunk of data to unshift onto the read queue. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray}, {DataView} or null. For object mode streams, chunk may be any JavaScript value.
  @param encoding Encoding of string chunks. Must be a valid Buffer encoding, such as 'utf8' or 'ascii'.
- wrap(stream: ReadableStream): this;
  Prior to Node.js 0.10, streams did not implement the entire node:stream module API as it is currently defined. (See Compatibility for more information.)
  When using an older Node.js library that emits 'data' events and has a pause method that is advisory only, the readable.wrap() method can be used to create a Readable stream that uses the old stream as its data source.
  It will rarely be necessary to use readable.wrap() but the method has been provided as a convenience for interacting with older Node.js applications and libraries.
  ```js
  import { OldReader } from './old-api-module.js';
  import { Readable } from 'node:stream';
  const oreader = new OldReader();
  const myReader = new Readable().wrap(oreader);

  myReader.on('readable', () => {
    myReader.read(); // etc.
  });
  ```
  @param stream An "old style" readable stream
- write(chunk: any, callback?: (error: Error | null | undefined) => void): boolean;
  write(chunk: any, encoding: BufferEncoding, callback?: (error: Error | null | undefined) => void): boolean;
  The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.
  The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.
  While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.
  Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.
  If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:
  ```js
  function write(data, cb) {
    if (!stream.write(data)) {
      stream.once('drain', cb);
    } else {
      process.nextTick(cb);
    }
  }

  // Wait for cb to be called before doing any other write.
  write('hello', () => {
    console.log('Write completed, do more writes now.');
  });
  ```
  A Writable stream in object mode will always ignore the encoding argument.
  @param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.
  @param encoding The encoding, if chunk is a string.
  @param callback Callback for when this chunk of data is flushed.
  @returns false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.
interface BrotliDecompress
Transform streams are
Duplexstreams where the output is in some way related to the input. Like allDuplexstreams,Transformstreams implement both theReadableandWritableinterfaces.Examples of
Transformstreams include:zlib streamscrypto streams
- allowHalfOpen: boolean
If
falsethen the stream will automatically end the writable side when the readable side ends. Set initially by theallowHalfOpenconstructor option, which defaults totrue.This can be changed manually to change the half-open behavior of an existing
Duplexstream instance, but must be changed before the'end'event is emitted. - readable: boolean
Is
trueif it is safe to call read, which means the stream has not been destroyed or emitted'error'or'end'. - readonly readableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'end'. - readonly readableEncoding: null | BufferEncoding
Getter for the property
encodingof a givenReadablestream. Theencodingproperty can be set using the setEncoding method. - readonly readableFlowing: null | boolean
This property reflects the current state of a
Readablestream as described in the Three states section. - readonly readableHighWaterMark: number
Returns the value of
highWaterMarkpassed when creating thisReadable. - readonly readableLength: number
This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the
highWaterMark. - readonly writable: boolean
Is
trueif it is safe to callwritable.write(), which means the stream has not been destroyed, errored, or ended. - readonly writableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'finish'. - readonly writableCorked: number
Number of times
writable.uncork()needs to be called in order to fully uncork the stream. - readonly writableEnded: boolean
Is
trueafterwritable.end()has been called. This property does not indicate whether the data has been flushed, for this usewritable.writableFinishedinstead. - readonly writableHighWaterMark: number
Return the value of
highWaterMarkpassed when creating thisWritable. - readonly writableLength: number
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the
highWaterMark. - readonly writableNeedDrain: boolean
Is
trueif the stream's buffer has been full and stream will emit'drain'. Calls
readable.destroy()with anAbortErrorand returns a promise that fulfills when the stream is finished.- @returns
AsyncIteratorto fully consume the stream. - event: 'close',listener: () => void): this;
Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'data',listener: (chunk: any) => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'drain',listener: () => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'end',listener: () => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'error',): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'finish',listener: () => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'pause',listener: () => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'pipe',): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'readable',listener: () => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'resume',listener: () => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'unpipe',): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: string | symbol,listener: (...args: any[]) => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
This method returns a new stream with chunks of the underlying stream paired with a counter in the form
[index, chunk]. The first index value is0and it increases by 1 for each chunk produced.@returnsa stream of indexed pairs.
- stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
The
writable.cork()method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.The primary intent of
writable.cork()is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination,writable.cork()buffers all the chunks untilwritable.uncork()is called, which will pass them all towritable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use ofwritable.cork()without implementingwritable._writev()may have an adverse effect on throughput.See also:
writable.uncork(),writable._writev().- ): this;
Destroy the stream. Optionally emit an
'error'event, and emit a'close'event (unlessemitCloseis set tofalse). After this call, the readable stream will release any internal resources and subsequent calls topush()will be ignored.Once
destroy()has been called any further calls will be a no-op and no further errors except from_destroy()may be emitted as'error'.Implementors should not override this method, but instead implement
readable._destroy().@param errorError which will be passed as payload in
'error'event - drop(limit: number,
This method returns a new stream with the first limit chunks dropped from the start.
@param limitthe number of chunks to drop from the readable.
@returnsa stream with limit chunks dropped from the start.
- emit(event: 'close'): boolean;
Synchronously calls each of the listeners registered for the event named
eventName, in the order they were registered, passing the supplied arguments to each.Returns
trueif the event had listeners,falseotherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener - end(cb?: () => void): this;
Calling the
writable.end()method signals that no more data will be written to theWritable. The optionalchunkandencodingarguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!end(chunk: any,cb?: () => void): this;Calling the
writable.end()method signals that no more data will be written to theWritable. The optionalchunkandencodingarguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.end(chunk: any,encoding: BufferEncoding,cb?: () => void): this;Calling the
writable.end()method signals that no more data will be written to theWritable. The optionalchunkandencodingarguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.@param encodingThe encoding if
chunkis a string Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbols.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]- ): Promise<boolean>;
This method is similar to
Array.prototype.everyand calls fn on each chunk in the stream to check if all awaited return values are truthy value for fn. Once an fn call on a chunkawaited return value is falsy, the stream is destroyed and the promise is fulfilled withfalse. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled withtrue.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
trueif fn returned a truthy value for every one of the chunks. This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be
awaited.@param fna function to filter chunks from the stream. Async or not.
@returnsa stream filtered with the predicate fn.
- ): Promise<undefined | T>;
This method is similar to
Array.prototype.findand calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled withundefined.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefinedif no element was found.find(): Promise<any>;This method is similar to
Array.prototype.findand calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled withundefined.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefinedif no element was found. This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.
It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.
@param fna function to map over every chunk in the stream. May be async. May be a stream or generator.
@returnsa stream flat-mapped with the function fn.
- ): Promise<void>;
This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be
awaited.This method is different from
for await...ofloops in that it can optionally process chunks concurrently. In addition, aforEachiteration can only be stopped by having passed asignaloption and aborting the related AbortController whilefor await...ofcan be stopped withbreakorreturn. In either case the stream will be destroyed.This method is different from listening to the
'data'event in that it uses thereadableevent in the underlying machinary and can limit the number of concurrent fn calls.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise for when the stream has finished.
Returns the current max listener value for the
EventEmitterwhich is either set byemitter.setMaxListeners(n)or defaults to EventEmitter.defaultMaxListeners.The
readable.isPaused()method returns the current operating state of theReadable. This is used primarily by the mechanism that underlies thereadable.pipe()method. In most typical cases, there will be no reason to use this method directly.const readable = new stream.Readable(); readable.isPaused(); // === false readable.pause(); readable.isPaused(); // === true readable.resume(); readable.isPaused(); // === false- options?: { destroyOnReturn: boolean }): AsyncIterator<any>;
The iterator created by this method gives users the option to cancel the destruction of the stream if the
for await...of loop is exited by return, break, or throw, or whether the iterator should destroy the stream if the stream emitted an error during iteration. - eventName: string | symbol,listener?: Function): number;
Returns the number of listeners listening for the event named
eventName. Iflisteneris provided, it will return how many times the listener is found in the list of the listeners of the event.@param eventNameThe name of the event being listened for
@param listenerThe event handler function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName.server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ] - map(
This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be
awaited before being passed to the result stream.@param fna function to map over every chunk in the stream. Async or not.
@returnsa stream mapped with the function fn.
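For example, a sketch of map with an async mapper (assuming the experimental stream helpers are available):

```js
import { Readable } from 'node:stream';

// Each returned promise is awaited before its result is
// pushed to the output stream.
const doubled = Readable.from([1, 2, 3]).map(async (n) => n * 2);

for await (const n of doubled) {
  console.log(n); // 2, 4, 6
}
```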
- eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for
emitter.removeListener(). - on(event: 'close',listener: () => void): this;
Adds the
listenerfunction to the end of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
- once(event: 'close',listener: () => void): this;
Adds a one-time
listenerfunction for the event namedeventName. The next timeeventNameis triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
The
readable.pause()method will cause a stream in flowing mode to stop emitting'data'events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.const readable = getReadableStreamSomehow(); readable.on('data', (chunk) => { console.log(`Received ${chunk.length} bytes of data.`); readable.pause(); console.log('There will be no additional data for 1 second.'); setTimeout(() => { console.log('Now data will start flowing again.'); readable.resume(); }, 1000); });The
readable.pause()method has no effect if there is a'readable'event listener.- event: 'close',listener: () => void): this;
Adds the
listenerfunction to the beginning of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
- event: 'close',listener: () => void): this;
Adds a one-time
listenerfunction for the event namedeventNameto the beginning of the listeners array. The next timeeventNameis triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName, including any wrappers (such as those created by.once()).import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log'); - read(size?: number): any;
The
readable.read()method reads data out of the internal buffer and returns it. If no data is available to be read,nullis returned. By default, the data is returned as aBufferobject unless an encoding has been specified using thereadable.setEncoding()method or the stream is operating in object mode.The optional
sizeargument specifies a specific number of bytes to read. Ifsizebytes are not available to be read,nullwill be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.If the
sizeargument is not specified, all of the data contained in the internal buffer will be returned.The
sizeargument must be less than or equal to 1 GiB.The
readable.read()method should only be called onReadablestreams operating in paused mode. In flowing mode,readable.read()is called automatically until the internal buffer is fully drained.const readable = getReadableStreamSomehow(); // 'readable' may be triggered multiple times as data is buffered in readable.on('readable', () => { let chunk; console.log('Stream is readable (new data received in buffer)'); // Use a loop to make sure we read all currently available data while (null !== (chunk = readable.read())) { console.log(`Read ${chunk.length} bytes of data...`); } }); // 'end' will be triggered once when there is no more data available readable.on('end', () => { console.log('Reached end of stream.'); });Each call to
readable.read() returns a chunk of data, or null. The chunks are not concatenated. A while loop is necessary to consume all data currently in the buffer. When reading a large file, .read() may return null once all buffered content has been consumed, even though more data that has not yet been buffered is still to come. In this case a new 'readable' event will be emitted when there is more data in the buffer. Finally, the 'end' event will be emitted when there is no more data to come.
readable, it is necessary to collect chunks across multiple'readable'events:const chunks = []; readable.on('readable', () => { let chunk; while (null !== (chunk = readable.read())) { chunks.push(chunk); } }); readable.on('end', () => { const content = chunks.join(''); });A
Readablestream in object mode will always return a single item from a call toreadable.read(size), regardless of the value of thesizeargument.If the
readable.read()method returns a chunk of data, a'data'event will also be emitted.Calling read after the
'end'event has been emitted will returnnull. No runtime error will be raised.@param sizeOptional argument to specify how much data to read.
- initial?: undefined,): Promise<T>;
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeError with the ERR_INVALID_ARGS code property.The reducer function iterates the stream element by element, which means there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function and use the readable.map method.@param fn a reducer function to call over every chunk in the stream. Async or not.
readable.mapmethod.@param fna reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
initial: T,): Promise<T>;This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeError with the ERR_INVALID_ARGS code property.The reducer function iterates the stream element by element, which means there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function and use the readable.map method.@param fn a reducer function to call over every chunk in the stream. Async or not.
readable.mapmethod.@param fna reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
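A small sketch of reduce; the accumulator logic is illustrative:

```js
import { Readable } from 'node:stream';

// Sum the lengths of all chunks, starting from an initial
// value of 0; the result resolves once the stream ends.
const totalLength = await Readable.from(['abc', 'de'])
  .reduce((acc, chunk) => acc + chunk.length, 0);

console.log(totalLength); // 5
```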
- eventName?: string | symbol): this;
Removes all listeners, or those of the specified
eventName.It is bad practice to remove listeners added elsewhere in the code, particularly when the
EventEmitterinstance was created by some other component or module (e.g. sockets or file streams).Returns a reference to the
EventEmitter, so that calls can be chained. - event: 'close',listener: () => void): this;
Removes the specified
listenerfrom the listener array for the event namedeventName.const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback);removeListener()will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specifiedeventName, thenremoveListener()must be called multiple times to remove each instance.Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any
removeListener()orremoveAllListeners()calls after emitting and before the last listener finishes execution will not remove them fromemit()in progress. Subsequent events behave as expected.import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // ABecause listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the
emitter.listeners()method will need to be recreated.When a single function has been added as a handler multiple times for a single event (as in the example below),
removeListener()will remove the most recently added instance. In the example theonce('ping')listener is removed:import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping');Returns a reference to the
EventEmitter, so that calls can be chained. The
readable.resume()method causes an explicitly pausedReadablestream to resume emitting'data'events, switching the stream into flowing mode.The
readable.resume()method can be used to fully consume the data from a stream without actually processing any of that data:getReadableStreamSomehow() .resume() .on('end', () => { console.log('Reached the end, but did not read anything.'); });The
readable.resume()method has no effect if there is a'readable'event listener.- encoding: BufferEncoding): this;
The
writable.setDefaultEncoding()method sets the defaultencodingfor aWritablestream.@param encodingThe new default encoding
- encoding: BufferEncoding): this;
The
readable.setEncoding()method sets the character encoding for data read from theReadablestream.By default, no encoding is assigned and stream data will be returned as
Bufferobjects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than asBufferobjects. For instance, callingreadable.setEncoding('utf8')will cause the output data to be interpreted as UTF-8 data, and passed as strings. Callingreadable.setEncoding('hex')will cause the data to be encoded in hexadecimal string format.The
Readablestream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream asBufferobjects.const readable = getReadableStreamSomehow(); readable.setEncoding('utf8'); readable.on('data', (chunk) => { assert.equal(typeof chunk, 'string'); console.log('Got %d characters of string data:', chunk.length); });@param encodingThe encoding to use.
- n: number): this;
By default
EventEmitters will print a warning if more than10listeners are added for a particular event. This is a useful default that helps finding memory leaks. Theemitter.setMaxListeners()method allows the limit to be modified for this specificEventEmitterinstance. The value can be set toInfinity(or0) to indicate an unlimited number of listeners.Returns a reference to the
EventEmitter, so that calls can be chained. - some(): Promise<boolean>;
This method is similar to
Array.prototype.some and calls fn on each chunk in the stream until the awaited return value is true (or any truthy value). Once the awaited return value of an fn call on a chunk is truthy, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false.@param fn a function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
trueif fn returned a truthy value for at least one of the chunks. - @param limit
the number of chunks to take from the readable.
@returnsa stream with limit chunks taken.
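A sketch showing some and take together (both helpers are experimental; the sample values are illustrative):

```js
import { Readable } from 'node:stream';

// some() short-circuits: the stream is destroyed as soon as
// a truthy result is awaited.
const hasEven = await Readable.from([1, 3, 4, 5]).some((n) => n % 2 === 0);
console.log(hasEven); // true

// take() passes through only the first `limit` chunks.
for await (const n of Readable.from([1, 2, 3, 4]).take(2)) {
  console.log(n); // 1, then 2
}
```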
- ): Promise<any[]>;
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.
@returnsa promise containing an array with the contents of the stream.
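A sketch of toArray, suitable for small streams and tests:

```js
import { Readable } from 'node:stream';

// Buffers the whole stream into memory before resolving.
const chunks = await Readable.from(['a', 'b', 'c']).toArray();
console.log(chunks); // ['a', 'b', 'c']
```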
The
writable.uncork()method flushes all data buffered since cork was called.When using
writable.cork()andwritable.uncork()to manage the buffering of writes to a stream, defer calls towritable.uncork()usingprocess.nextTick(). Doing so allows batching of allwritable.write()calls that occur within a given Node.js event loop phase.stream.cork(); stream.write('some '); stream.write('data '); process.nextTick(() => stream.uncork());If the
writable.cork()method is called multiple times on a stream, the same number of calls towritable.uncork()must be called to flush the buffered data.stream.cork(); stream.write('some '); stream.cork(); stream.write('data '); process.nextTick(() => { stream.uncork(); // The data will not be flushed until uncork() is called a second time. stream.uncork(); });See also:
writable.cork().- destination?: WritableStream): this;
The
readable.unpipe()method detaches aWritablestream previously attached using the pipe method.If the
destinationis not specified, then all pipes are detached.If the
destinationis specified, but no pipe is set up for it, then the method does nothing.import fs from 'node:fs'; const readable = getReadableStreamSomehow(); const writable = fs.createWriteStream('file.txt'); // All the data from readable goes into 'file.txt', // but only for the first second. readable.pipe(writable); setTimeout(() => { console.log('Stop writing to file.txt.'); readable.unpipe(writable); console.log('Manually close the file stream.'); writable.end(); }, 1000);@param destinationOptional specific stream to unpipe
- chunk: any,encoding?: BufferEncoding): void;
Passing
chunkasnullsignals the end of the stream (EOF) and behaves the same asreadable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.The
readable.unshift()method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.The
stream.unshift(chunk)method cannot be called after the'end'event has been emitted or a runtime error will be thrown.Developers using
stream.unshift()often should consider switching to use of aTransformstream instead. See theAPI for stream implementerssection for more information.// Pull off a header delimited by \n\n. // Use unshift() if we get too much. // Call the callback with (error, header, stream). import { StringDecoder } from 'node:string_decoder'; function parseHeader(stream, callback) { stream.on('error', callback); stream.on('readable', onReadable); const decoder = new StringDecoder('utf8'); let header = ''; function onReadable() { let chunk; while (null !== (chunk = stream.read())) { const str = decoder.write(chunk); if (str.includes('\n\n')) { // Found the header boundary. const split = str.split(/\n\n/); header += split.shift(); const remaining = split.join('\n\n'); const buf = Buffer.from(remaining, 'utf8'); stream.removeListener('error', callback); // Remove the 'readable' listener before unshifting. stream.removeListener('readable', onReadable); if (buf.length) stream.unshift(buf); // Now the body of the message can be read from the stream. callback(null, header, stream); return; } // Still reading the header. header += str; } } }Unlike push,
stream.unshift(chunk)will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results ifreadable.unshift()is called during a read (i.e. from within a _read implementation on a custom stream). Following the call toreadable.unshift()with an immediate push will reset the reading state appropriately, however it is best to simply avoid callingreadable.unshift()while in the process of performing a read.@param chunkChunk of data to unshift onto the read queue. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray}, {DataView} ornull. For object mode streams,chunkmay be any JavaScript value.@param encodingEncoding of string chunks. Must be a valid
Bufferencoding, such as'utf8'or'ascii'. - wrap(stream: ReadableStream): this;
Prior to Node.js 0.10, streams did not implement the entire
node:streammodule API as it is currently defined. (SeeCompatibilityfor more information.)When using an older Node.js library that emits
'data'events and has a pause method that is advisory only, thereadable.wrap()method can be used to create aReadablestream that uses the old stream as its data source.It will rarely be necessary to use
readable.wrap()but the method has been provided as a convenience for interacting with older Node.js applications and libraries.import { OldReader } from './old-api-module.js'; import { Readable } from 'node:stream'; const oreader = new OldReader(); const myReader = new Readable().wrap(oreader); myReader.on('readable', () => { myReader.read(); // etc. });@param streamAn "old style" readable stream
- chunk: any,): boolean;
The
writable.write()method writes some data to the stream, and calls the suppliedcallbackonce the data has been fully handled. If an error occurs, thecallbackwill be called with the error as its first argument. Thecallbackis called asynchronously and before'error'is emitted.The return value is
trueif the internal buffer is less than thehighWaterMarkconfigured when the stream was created after admittingchunk. Iffalseis returned, further attempts to write data to the stream should stop until the'drain'event is emitted.While a stream is not draining, calls to
write()will bufferchunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'event will be emitted. Oncewrite()returns false, do not write more chunks until the'drain'event is emitted. While callingwrite()on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform, because theTransformstreams are paused by default until they are piped or a'data'or'readable'event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readableand use pipe. However, if callingwrite()is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });A
Writablestream in object mode will always ignore theencodingargument.@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.@param callbackCallback for when this chunk of data is flushed.
@returnsfalseif the stream wishes for the calling code to wait for the'drain'event to be emitted before continuing to write additional data; otherwisetrue.chunk: any,encoding: BufferEncoding,): boolean;The
writable.write()method writes some data to the stream, and calls the suppliedcallbackonce the data has been fully handled. If an error occurs, thecallbackwill be called with the error as its first argument. Thecallbackis called asynchronously and before'error'is emitted.The return value is
trueif the internal buffer is less than thehighWaterMarkconfigured when the stream was created after admittingchunk. Iffalseis returned, further attempts to write data to the stream should stop until the'drain'event is emitted.While a stream is not draining, calls to
write()will bufferchunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'event will be emitted. Oncewrite()returns false, do not write more chunks until the'drain'event is emitted. While callingwrite()on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform, because theTransformstreams are paused by default until they are piped or a'data'or'readable'event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readableand use pipe. However, if callingwrite()is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });A
Writablestream in object mode will always ignore theencodingargument.@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.@param encodingThe encoding, if
chunkis a string.@param callbackCallback for when this chunk of data is flushed.
@returnsfalseif the stream wishes for the calling code to wait for the'drain'event to be emitted before continuing to write additional data; otherwisetrue.
interface BrotliOptions
interface Deflate
Transform streams are
Duplexstreams where the output is in some way related to the input. Like allDuplexstreams,Transformstreams implement both theReadableandWritableinterfaces.Examples of
Transformstreams include:zlib streamscrypto streams
- allowHalfOpen: boolean
If
falsethen the stream will automatically end the writable side when the readable side ends. Set initially by theallowHalfOpenconstructor option, which defaults totrue.This can be changed manually to change the half-open behavior of an existing
Duplexstream instance, but must be changed before the'end'event is emitted. - readable: boolean
Is
trueif it is safe to call read, which means the stream has not been destroyed or emitted'error'or'end'. - readonly readableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'end'. - readonly readableEncoding: null | BufferEncoding
Getter for the property
encodingof a givenReadablestream. Theencodingproperty can be set using the setEncoding method. - readonly readableFlowing: null | boolean
This property reflects the current state of a
Readablestream as described in the Three states section. - readonly readableHighWaterMark: number
Returns the value of
highWaterMarkpassed when creating thisReadable. - readonly readableLength: number
This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the
highWaterMark. - readonly writable: boolean
Is
trueif it is safe to callwritable.write(), which means the stream has not been destroyed, errored, or ended. - readonly writableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'finish'. - readonly writableCorked: number
Number of times
writable.uncork()needs to be called in order to fully uncork the stream. - readonly writableEnded: boolean
Is
trueafterwritable.end()has been called. This property does not indicate whether the data has been flushed, for this usewritable.writableFinishedinstead. - readonly writableHighWaterMark: number
Returns the value of
highWaterMarkpassed when creating thisWritable. - readonly writableLength: number
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the
highWaterMark. - readonly writableNeedDrain: boolean
Is
true if the stream's buffer has been full and the stream will emit 'drain'. Calls
readable.destroy()with anAbortErrorand returns a promise that fulfills when the stream is finished.- @returns
AsyncIteratorto fully consume the stream. - event: 'close',listener: () => void): this;
Event emitter. The defined events on this stream include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'data',listener: (chunk: any) => void): this;Event emitter. The defined events on this stream include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'drain',listener: () => void): this;Event emitter. The defined events on this stream include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'end',listener: () => void): this;Event emitter. The defined events on this stream include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'error',): this;Event emitter. The defined events on this stream include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'finish',listener: () => void): this;Event emitter. The defined events on this stream include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'pause',listener: () => void): this;Event emitter. The defined events on this stream include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'pipe',): this;Event emitter. The defined events on this stream include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'readable',listener: () => void): this;Event emitter. The defined events on this stream include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'resume',listener: () => void): this;Event emitter. The defined events on this stream include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'unpipe',): this;Event emitter. The defined events on this stream include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: string | symbol,listener: (...args: any[]) => void): this;Event emitter. The defined events on this stream include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
This method returns a new stream with chunks of the underlying stream paired with a counter in the form
[index, chunk]. The first index value is0and it increases by 1 for each chunk produced.@returnsa stream of indexed pairs.
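A sketch of asIndexedPairs (experimental in Node.js, and deprecated in some later versions; shown here only for illustration):

```js
import { Readable } from 'node:stream';

// Each chunk is paired with its zero-based index.
for await (const [index, chunk] of Readable.from(['x', 'y']).asIndexedPairs()) {
  console.log(index, chunk); // 0 'x', then 1 'y'
}
```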
- stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
The
writable.cork()method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.The primary intent of
writable.cork()is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination,writable.cork()buffers all the chunks untilwritable.uncork()is called, which will pass them all towritable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use ofwritable.cork()without implementingwritable._writev()may have an adverse effect on throughput.See also:
writable.uncork(),writable._writev().- ): this;
Destroy the stream. Optionally emit an
'error'event, and emit a'close'event (unlessemitCloseis set tofalse). After this call, the readable stream will release any internal resources and subsequent calls topush()will be ignored.Once
destroy()has been called any further calls will be a no-op and no further errors except from_destroy()may be emitted as'error'.Implementors should not override this method, but instead implement
readable._destroy().@param errorError which will be passed as payload in
'error'event - drop(limit: number,
This method returns a new stream with the first limit chunks dropped from the start.
@param limitthe number of chunks to drop from the readable.
@returnsa stream with limit chunks dropped from the start.
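A sketch of drop (the sample values are illustrative):

```js
import { Readable } from 'node:stream';

// Discard the first two chunks and pass the rest through.
for await (const n of Readable.from([1, 2, 3, 4]).drop(2)) {
  console.log(n); // 3, then 4
}
```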
- emit(event: 'close'): boolean;
Synchronously calls each of the listeners registered for the event named
eventName, in the order they were registered, passing the supplied arguments to each.Returns
trueif the event had listeners,falseotherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener - end(cb?: () => void): this;
Calling the
writable.end()method signals that no more data will be written to theWritable. The optionalchunkandencodingarguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!end(chunk: any,cb?: () => void): this;Calling the
writable.end()method signals that no more data will be written to theWritable. The optionalchunkandencodingarguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.end(chunk: any,encoding: BufferEncoding,cb?: () => void): this;Calling the
writable.end()method signals that no more data will be written to theWritable. The optionalchunkandencodingarguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.@param encodingThe encoding if
chunkis a string Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbols.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]- ): Promise<boolean>;
This method is similar to
Array.prototype.every and calls fn on each chunk in the stream to check whether all awaited return values are truthy for fn. Once the awaited return value of an fn call on a chunk is falsy, the stream is destroyed and the promise is fulfilled with false. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true.@param fn a function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
trueif fn returned a truthy value for every one of the chunks. This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be
awaited.@param fna function to filter chunks from the stream. Async or not.
@returnsa stream filtered with the predicate fn.
- ): Promise<undefined | T>;
This method is similar to
Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to the first chunk for which fn evaluated to a truthy value, or
undefinedif no element was found.find(): Promise<any>;This method is similar to
Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to the first chunk for which fn evaluated to a truthy value, or
undefinedif no element was found. This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.
It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.
@param fna function to map over every chunk in the stream. May be async. May be a stream or generator.
@returnsa stream flat-mapped with the function fn.
- ): Promise<void>;
This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be
awaited.This method is different from
for await...ofloops in that it can optionally process chunks concurrently. In addition, aforEachiteration can only be stopped by having passed asignaloption and aborting the related AbortController whilefor await...ofcan be stopped withbreakorreturn. In either case the stream will be destroyed.This method is different from listening to the
'data' event in that it uses the 'readable' event in the underlying machinery and can limit the number of concurrent fn calls.@param fn a function to call on each chunk of the stream. Async or not.
@returnsa promise for when the stream has finished.
Returns the current max listener value for the
EventEmitterwhich is either set byemitter.setMaxListeners(n)or defaults to EventEmitter.defaultMaxListeners.The
readable.isPaused()method returns the current operating state of theReadable. This is used primarily by the mechanism that underlies thereadable.pipe()method. In most typical cases, there will be no reason to use this method directly.const readable = new stream.Readable(); readable.isPaused(); // === false readable.pause(); readable.isPaused(); // === true readable.resume(); readable.isPaused(); // === false- options?: { destroyOnReturn: boolean }): AsyncIterator<any>;
The iterator created by this method gives users the option to cancel the destruction of the stream if the
for await...ofloop is exited byreturn,break, orthrow, or if the iterator should destroy the stream if the stream emitted an error during iteration. - eventName: string | symbol,listener?: Function): number;
Returns the number of listeners listening for the event named
eventName. Iflisteneris provided, it will return how many times the listener is found in the list of the listeners of the event.@param eventNameThe name of the event being listened for
@param listenerThe event handler function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName.server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ] - map(
This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be
awaited before being passed to the result stream.@param fna function to map over every chunk in the stream. Async or not.
@returnsa stream mapped with the function fn.
- eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for
emitter.removeListener(). - on(event: 'close',listener: () => void): this;
Adds the
listenerfunction to the end of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
- once(event: 'close',listener: () => void): this;
Adds a one-time
listenerfunction for the event namedeventName. The next timeeventNameis triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
The
readable.pause()method will cause a stream in flowing mode to stop emitting'data'events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.const readable = getReadableStreamSomehow(); readable.on('data', (chunk) => { console.log(`Received ${chunk.length} bytes of data.`); readable.pause(); console.log('There will be no additional data for 1 second.'); setTimeout(() => { console.log('Now data will start flowing again.'); readable.resume(); }, 1000); });The
readable.pause()method has no effect if there is a'readable'event listener.- event: 'close',listener: () => void): this;
Adds the
listenerfunction to the beginning of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
- event: 'close',listener: () => void): this;
Adds a one-time
listenerfunction for the event namedeventNameto the beginning of the listeners array. The next timeeventNameis triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName, including any wrappers (such as those created by.once()).import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log'); - read(size?: number): any;
The
readable.read()method reads data out of the internal buffer and returns it. If no data is available to be read,nullis returned. By default, the data is returned as aBufferobject unless an encoding has been specified using thereadable.setEncoding()method or the stream is operating in object mode.The optional
sizeargument specifies a specific number of bytes to read. Ifsizebytes are not available to be read,nullwill be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.If the
sizeargument is not specified, all of the data contained in the internal buffer will be returned.The
sizeargument must be less than or equal to 1 GiB.The
readable.read()method should only be called onReadablestreams operating in paused mode. In flowing mode,readable.read()is called automatically until the internal buffer is fully drained.const readable = getReadableStreamSomehow(); // 'readable' may be triggered multiple times as data is buffered in readable.on('readable', () => { let chunk; console.log('Stream is readable (new data received in buffer)'); // Use a loop to make sure we read all currently available data while (null !== (chunk = readable.read())) { console.log(`Read ${chunk.length} bytes of data...`); } }); // 'end' will be triggered once when there is no more data available readable.on('end', () => { console.log('Reached end of stream.'); });Each call to
readable.read() returns a chunk of data, or null. The chunks are not concatenated. A while loop is necessary to consume all data currently in the buffer. When reading a large file, .read() may return null once all buffered content has been consumed, even though more data that has not yet been buffered is still to come. In this case a new 'readable' event will be emitted when there is more data in the buffer. Finally, the 'end' event will be emitted when there is no more data to come.
readable, it is necessary to collect chunks across multiple'readable'events:const chunks = []; readable.on('readable', () => { let chunk; while (null !== (chunk = readable.read())) { chunks.push(chunk); } }); readable.on('end', () => { const content = chunks.join(''); });A
Readablestream in object mode will always return a single item from a call toreadable.read(size), regardless of the value of thesizeargument.If the
readable.read()method returns a chunk of data, a'data'event will also be emitted.Calling read after the
'end'event has been emitted will returnnull. No runtime error will be raised.@param sizeOptional argument to specify how much data to read.
- initial?: undefined,): Promise<T>;
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeError with the ERR_INVALID_ARGS code property.The reducer function iterates the stream element by element, which means there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function and use the readable.map method.@param fn a reducer function to call over every chunk in the stream. Async or not.
readable.mapmethod.@param fna reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
initial: T,): Promise<T>;This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeError with the ERR_INVALID_ARGS code property.The reducer function iterates the stream element by element, which means there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function and use the readable.map method.@param fn a reducer function to call over every chunk in the stream. Async or not.
readable.mapmethod.@param fna reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
- eventName?: string | symbol): this;
Removes all listeners, or those of the specified
eventName.It is bad practice to remove listeners added elsewhere in the code, particularly when the
EventEmitterinstance was created by some other component or module (e.g. sockets or file streams).Returns a reference to the
EventEmitter, so that calls can be chained. - event: 'close',listener: () => void): this;
Removes the specified
listenerfrom the listener array for the event namedeventName.const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback);removeListener()will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specifiedeventName, thenremoveListener()must be called multiple times to remove each instance.Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any
removeListener()orremoveAllListeners()calls after emitting and before the last listener finishes execution will not remove them fromemit()in progress. Subsequent events behave as expected.import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // ABecause listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the
emitter.listeners()method will need to be recreated.When a single function has been added as a handler multiple times for a single event (as in the example below),
removeListener()will remove the most recently added instance. In the example theonce('ping')listener is removed:import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping');Returns a reference to the
EventEmitter, so that calls can be chained. The
readable.resume()method causes an explicitly pausedReadablestream to resume emitting'data'events, switching the stream into flowing mode.The
readable.resume()method can be used to fully consume the data from a stream without actually processing any of that data:getReadableStreamSomehow() .resume() .on('end', () => { console.log('Reached the end, but did not read anything.'); });The
readable.resume()method has no effect if there is a'readable'event listener.- encoding: BufferEncoding): this;
The
writable.setDefaultEncoding()method sets the defaultencodingfor aWritablestream.@param encodingThe new default encoding
- encoding: BufferEncoding): this;
The
readable.setEncoding()method sets the character encoding for data read from theReadablestream.By default, no encoding is assigned and stream data will be returned as
Bufferobjects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than asBufferobjects. For instance, callingreadable.setEncoding('utf8')will cause the output data to be interpreted as UTF-8 data, and passed as strings. Callingreadable.setEncoding('hex')will cause the data to be encoded in hexadecimal string format.The
Readablestream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream asBufferobjects.const readable = getReadableStreamSomehow(); readable.setEncoding('utf8'); readable.on('data', (chunk) => { assert.equal(typeof chunk, 'string'); console.log('Got %d characters of string data:', chunk.length); });@param encodingThe encoding to use.
- n: number): this;
By default
EventEmitters will print a warning if more than10listeners are added for a particular event. This is a useful default that helps finding memory leaks. Theemitter.setMaxListeners()method allows the limit to be modified for this specificEventEmitterinstance. The value can be set toInfinity(or0) to indicate an unlimited number of listeners.Returns a reference to the
EventEmitter, so that calls can be chained. - some(): Promise<boolean>;
This method is similar to
Array.prototype.some and calls fn on each chunk in the stream until the awaited return value is true (or any truthy value). Once the awaited return value of an fn call on a chunk is truthy, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false.@param fn a function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
trueif fn returned a truthy value for at least one of the chunks. - @param limit
the number of chunks to take from the readable.
@returnsa stream with limit chunks taken.
- ): Promise<any[]>;
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.
@returnsa promise containing an array with the contents of the stream.
The
writable.uncork()method flushes all data buffered since cork was called.When using
writable.cork()andwritable.uncork()to manage the buffering of writes to a stream, defer calls towritable.uncork()usingprocess.nextTick(). Doing so allows batching of allwritable.write()calls that occur within a given Node.js event loop phase.stream.cork(); stream.write('some '); stream.write('data '); process.nextTick(() => stream.uncork());If the
writable.cork()method is called multiple times on a stream, the same number of calls towritable.uncork()must be called to flush the buffered data.stream.cork(); stream.write('some '); stream.cork(); stream.write('data '); process.nextTick(() => { stream.uncork(); // The data will not be flushed until uncork() is called a second time. stream.uncork(); });See also:
writable.cork().- destination?: WritableStream): this;
The
readable.unpipe()method detaches aWritablestream previously attached using the pipe method.If the
destinationis not specified, then all pipes are detached.If the
destinationis specified, but no pipe is set up for it, then the method does nothing.import fs from 'node:fs'; const readable = getReadableStreamSomehow(); const writable = fs.createWriteStream('file.txt'); // All the data from readable goes into 'file.txt', // but only for the first second. readable.pipe(writable); setTimeout(() => { console.log('Stop writing to file.txt.'); readable.unpipe(writable); console.log('Manually close the file stream.'); writable.end(); }, 1000);@param destinationOptional specific stream to unpipe
- chunk: any,encoding?: BufferEncoding): void;
Passing
chunkasnullsignals the end of the stream (EOF) and behaves the same asreadable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.The
readable.unshift()method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.The
stream.unshift(chunk)method cannot be called after the'end'event has been emitted or a runtime error will be thrown.Developers using
stream.unshift()often should consider switching to use of aTransformstream instead. See theAPI for stream implementerssection for more information.// Pull off a header delimited by \n\n. // Use unshift() if we get too much. // Call the callback with (error, header, stream). import { StringDecoder } from 'node:string_decoder'; function parseHeader(stream, callback) { stream.on('error', callback); stream.on('readable', onReadable); const decoder = new StringDecoder('utf8'); let header = ''; function onReadable() { let chunk; while (null !== (chunk = stream.read())) { const str = decoder.write(chunk); if (str.includes('\n\n')) { // Found the header boundary. const split = str.split(/\n\n/); header += split.shift(); const remaining = split.join('\n\n'); const buf = Buffer.from(remaining, 'utf8'); stream.removeListener('error', callback); // Remove the 'readable' listener before unshifting. stream.removeListener('readable', onReadable); if (buf.length) stream.unshift(buf); // Now the body of the message can be read from the stream. callback(null, header, stream); return; } // Still reading the header. header += str; } } }Unlike push,
stream.unshift(chunk)will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results ifreadable.unshift()is called during a read (i.e. from within a _read implementation on a custom stream). Following the call toreadable.unshift()with an immediate push will reset the reading state appropriately, however it is best to simply avoid callingreadable.unshift()while in the process of performing a read.@param chunkChunk of data to unshift onto the read queue. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray}, {DataView} ornull. For object mode streams,chunkmay be any JavaScript value.@param encodingEncoding of string chunks. Must be a valid
Bufferencoding, such as'utf8'or'ascii'. - wrap(stream: ReadableStream): this;
Prior to Node.js 0.10, streams did not implement the entire
node:streammodule API as it is currently defined. (SeeCompatibilityfor more information.)When using an older Node.js library that emits
'data'events and has a pause method that is advisory only, thereadable.wrap()method can be used to create aReadablestream that uses the old stream as its data source.It will rarely be necessary to use
readable.wrap()but the method has been provided as a convenience for interacting with older Node.js applications and libraries.import { OldReader } from './old-api-module.js'; import { Readable } from 'node:stream'; const oreader = new OldReader(); const myReader = new Readable().wrap(oreader); myReader.on('readable', () => { myReader.read(); // etc. });@param streamAn "old style" readable stream
- chunk: any,): boolean;
The
writable.write()method writes some data to the stream, and calls the suppliedcallbackonce the data has been fully handled. If an error occurs, thecallbackwill be called with the error as its first argument. Thecallbackis called asynchronously and before'error'is emitted.The return value is
trueif the internal buffer is less than thehighWaterMarkconfigured when the stream was created after admittingchunk. Iffalseis returned, further attempts to write data to the stream should stop until the'drain'event is emitted.While a stream is not draining, calls to
write()will bufferchunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'event will be emitted. Oncewrite()returns false, do not write more chunks until the'drain'event is emitted. While callingwrite()on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform, because theTransformstreams are paused by default until they are piped or a'data'or'readable'event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readableand use pipe. However, if callingwrite()is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });A
Writablestream in object mode will always ignore theencodingargument.@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.@param callbackCallback for when this chunk of data is flushed.
@returnsfalseif the stream wishes for the calling code to wait for the'drain'event to be emitted before continuing to write additional data; otherwisetrue.chunk: any,encoding: BufferEncoding,): boolean;The
writable.write()method writes some data to the stream, and calls the suppliedcallbackonce the data has been fully handled. If an error occurs, thecallbackwill be called with the error as its first argument. Thecallbackis called asynchronously and before'error'is emitted.The return value is
trueif the internal buffer is less than thehighWaterMarkconfigured when the stream was created after admittingchunk. Iffalseis returned, further attempts to write data to the stream should stop until the'drain'event is emitted.While a stream is not draining, calls to
write()will bufferchunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'event will be emitted. Oncewrite()returns false, do not write more chunks until the'drain'event is emitted. While callingwrite()on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform, because theTransformstreams are paused by default until they are piped or a'data'or'readable'event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readableand use pipe. However, if callingwrite()is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });A
Writablestream in object mode will always ignore theencodingargument.@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.@param encodingThe encoding, if
chunkis a string.@param callbackCallback for when this chunk of data is flushed.
@returnsfalseif the stream wishes for the calling code to wait for the'drain'event to be emitted before continuing to write additional data; otherwisetrue.
interface DeflateRaw
Transform streams are
Duplexstreams where the output is in some way related to the input. Like allDuplexstreams,Transformstreams implement both theReadableandWritableinterfaces.Examples of
Transformstreams include:zlib streamscrypto streams
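A hedged round-trip sketch showing DeflateRaw used as an ordinary Transform (the input string is arbitrary):

```ts
import { createDeflateRaw } from "node:zlib";
import { Readable } from "node:stream";

// DeflateRaw is a Transform: pipe plain bytes in, read raw
// deflate-compressed bytes out the other side.
const deflate = createDeflateRaw();
Readable.from(["hello hello hello"]).pipe(deflate);
const compressed = Buffer.concat(await deflate.toArray());
console.log(compressed.length); // typically smaller for repetitive input
```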
- allowHalfOpen: boolean
If
falsethen the stream will automatically end the writable side when the readable side ends. Set initially by theallowHalfOpenconstructor option, which defaults totrue.This can be changed manually to change the half-open behavior of an existing
Duplexstream instance, but must be changed before the'end'event is emitted. - readable: boolean
Is
trueif it is safe to call read, which means the stream has not been destroyed or emitted'error'or'end'. - readonly readableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'end'. - readonly readableEncoding: null | BufferEncoding
Getter for the property
encodingof a givenReadablestream. Theencodingproperty can be set using the setEncoding method. - readonly readableFlowing: null | boolean
This property reflects the current state of a
Readablestream as described in the Three states section. - readonly readableHighWaterMark: number
Returns the value of
highWaterMarkpassed when creating thisReadable. - readonly readableLength: number
This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the
highWaterMark. - readonly writable: boolean
Is
trueif it is safe to callwritable.write(), which means the stream has not been destroyed, errored, or ended. - readonly writableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'finish'. - readonly writableCorked: number
Number of times
writable.uncork()needs to be called in order to fully uncork the stream. - readonly writableEnded: boolean
Is
trueafterwritable.end()has been called. This property does not indicate whether the data has been flushed, for this usewritable.writableFinishedinstead. - readonly writableHighWaterMark: number
Returns the value of
highWaterMarkpassed when creating thisWritable. - readonly writableLength: number
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the
highWaterMark. - readonly writableNeedDrain: boolean
Is
trueif the stream's buffer has been full and the stream will emit'drain'. Calls
readable.destroy()with anAbortErrorand returns a promise that fulfills when the stream is finished.- @returns
AsyncIteratorto fully consume the stream. - event: 'close',listener: () => void): this;
Event emitter. The defined events on documents include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'data',listener: (chunk: any) => void): this;Event emitter. The defined events on documents include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'drain',listener: () => void): this;Event emitter. The defined events on documents include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'end',listener: () => void): this;Event emitter. The defined events on documents include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'error',): this;Event emitter. The defined events on documents include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'finish',listener: () => void): this;Event emitter. The defined events on documents include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'pause',listener: () => void): this;Event emitter. The defined events on documents include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'pipe',): this;Event emitter. The defined events on documents include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'readable',listener: () => void): this;Event emitter. The defined events on documents include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'resume',listener: () => void): this;Event emitter. The defined events on documents include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'unpipe',): this;Event emitter. The defined events on documents include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: string | symbol,listener: (...args: any[]) => void): this;Event emitter. The defined events on documents include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
This method returns a new stream with chunks of the underlying stream paired with a counter in the form
[index, chunk]. The first index value is0and it increases by 1 for each chunk produced.@returnsa stream of indexed pairs.
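A sketch of the indexed-pairs helper; note this helper has been deprecated in newer Node.js releases, so treat its availability as an assumption:

```ts
import { Readable } from "node:stream";

// Each chunk is wrapped as [index, chunk], counting from 0.
const pairs = await Readable.from(["a", "b"]).asIndexedPairs().toArray();
console.log(pairs); // [ [ 0, 'a' ], [ 1, 'b' ] ]
```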
- stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
The
writable.cork()method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.The primary intent of
writable.cork()is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination,writable.cork()buffers all the chunks untilwritable.uncork()is called, which will pass them all towritable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use ofwritable.cork()without implementingwritable._writev()may have an adverse effect on throughput.See also:
writable.uncork(),writable._writev().- ): this;
Destroy the stream. Optionally emit an
'error'event, and emit a'close'event (unlessemitCloseis set tofalse). After this call, the readable stream will release any internal resources and subsequent calls topush()will be ignored.Once
destroy()has been called any further calls will be a no-op and no further errors except from_destroy()may be emitted as'error'.Implementors should not override this method, but instead implement
readable._destroy().@param errorError which will be passed as payload in
'error'event - drop(limit: number,
This method returns a new stream with the first limit chunks dropped from the start.
@param limitthe number of chunks to drop from the readable.
@returnsa stream with limit chunks dropped from the start.
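A minimal sketch (sample values are made up):

```ts
import { Readable } from "node:stream";

// drop(2) discards the first two chunks and forwards the rest.
const rest = await Readable.from([1, 2, 3, 4]).drop(2).toArray();
console.log(rest); // [ 3, 4 ]
```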
- emit(event: 'close'): boolean;
Synchronously calls each of the listeners registered for the event named
eventName, in the order they were registered, passing the supplied arguments to each.Returns
trueif the event had listeners,falseotherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener - end(cb?: () => void): this;
Calling the
writable.end()method signals that no more data will be written to theWritable. The optionalchunkandencodingarguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!end(chunk: any,cb?: () => void): this;Calling the
writable.end()method signals that no more data will be written to theWritable. The optionalchunkandencodingarguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.end(chunk: any,encoding: BufferEncoding,cb?: () => void): this;Calling the
writable.end()method signals that no more data will be written to theWritable. The optionalchunkandencodingarguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.@param encodingThe encoding if
chunkis a string. Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbols.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]- ): Promise<boolean>;
This method is similar to
Array.prototype.everyand calls fn on each chunk in the stream to check if all awaited return values are truthy for fn. Once an fn call on a chunk's awaited return value is falsy, the stream is destroyed and the promise is fulfilled withfalse. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled withtrue.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
trueif fn returned a truthy value for every one of the chunks. This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be
awaited.@param fna function to filter chunks from the stream. Async or not.
@returnsa stream filtered with the predicate fn.
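A sketch with an async predicate; the concurrency option bounds how many fn calls run at once:

```ts
import { Readable } from "node:stream";

// Only chunks whose (awaited) predicate result is truthy pass through.
const evens = await Readable.from([1, 2, 3, 4])
  .filter(async (n) => n % 2 === 0, { concurrency: 2 })
  .toArray();
console.log(evens); // [ 2, 4 ]
```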
- ): Promise<undefined | T>;
This method is similar to
Array.prototype.findand calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled withundefined.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefinedif no element was found.find(): Promise<any>;This method is similar to
Array.prototype.findand calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled withundefined.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefinedif no element was found. This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.
It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.
@param fna function to map over every chunk in the stream. May be async. May be a stream or generator.
@returnsa stream flat-mapped with the function fn.
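A small sketch where each mapped result is an iterable that gets flattened into the output:

```ts
import { Readable } from "node:stream";

// Each returned iterable is flattened one level into the output stream.
const words = await Readable.from(["hello world", "foo bar"])
  .flatMap((line) => line.split(" "))
  .toArray();
console.log(words); // [ 'hello', 'world', 'foo', 'bar' ]
```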
- ): Promise<void>;
This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be
awaited.This method is different from
for await...ofloops in that it can optionally process chunks concurrently. In addition, aforEachiteration can only be stopped by having passed asignaloption and aborting the related AbortController whilefor await...ofcan be stopped withbreakorreturn. In either case the stream will be destroyed.This method is different from listening to the
'data'event in that it uses thereadableevent in the underlying machinery and can limit the number of concurrent fn calls.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise for when the stream has finished.
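A sketch contrasting forEach with for await...of: concurrent processing is possible, and early cancellation goes through an AbortSignal:

```ts
import { Readable } from "node:stream";

const ac = new AbortController();
// Up to two fn calls may be in flight at once; aborting ac.signal
// is the only way to stop the iteration early.
await Readable.from([1, 2, 3]).forEach(
  async (n) => console.log(n),
  { concurrency: 2, signal: ac.signal },
);
```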
Returns the current max listener value for the
EventEmitterwhich is either set byemitter.setMaxListeners(n)or defaults to EventEmitter.defaultMaxListeners.The
readable.isPaused()method returns the current operating state of theReadable. This is used primarily by the mechanism that underlies thereadable.pipe()method. In most typical cases, there will be no reason to use this method directly.const readable = new stream.Readable(); readable.isPaused(); // === false readable.pause(); readable.isPaused(); // === true readable.resume(); readable.isPaused(); // === false- options?: { destroyOnReturn: boolean }): AsyncIterator<any>;
The iterator created by this method gives users the option to cancel the destruction of the stream if the
for await...ofloop is exited byreturn,break, orthrow, or if the iterator should destroy the stream if the stream emitted an error during iteration. - eventName: string | symbol,listener?: Function): number;
Returns the number of listeners listening for the event named
eventName. Iflisteneris provided, it will return how many times the listener is found in the list of the listeners of the event.@param eventNameThe name of the event being listened for
@param listenerThe event handler function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName.server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ] - map(
This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be
awaited before being passed to the result stream.@param fna function to map over every chunk in the stream. Async or not.
@returnsa stream mapped with the function fn.
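A short sketch with an async mapper (sample values are made up):

```ts
import { Readable } from "node:stream";

// Async mapper results are awaited before being pushed downstream;
// concurrency bounds the number of in-flight fn calls.
const doubled = await Readable.from([1, 2, 3])
  .map(async (n) => n * 2, { concurrency: 2 })
  .toArray();
console.log(doubled); // [ 2, 4, 6 ]
```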
- eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for
emitter.removeListener(). - on(event: 'close',listener: () => void): this;
Adds the
listenerfunction to the end of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
- once(event: 'close',listener: () => void): this;
Adds a one-time
listenerfunction for the event namedeventName. The next timeeventNameis triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
The
readable.pause()method will cause a stream in flowing mode to stop emitting'data'events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.const readable = getReadableStreamSomehow(); readable.on('data', (chunk) => { console.log(`Received ${chunk.length} bytes of data.`); readable.pause(); console.log('There will be no additional data for 1 second.'); setTimeout(() => { console.log('Now data will start flowing again.'); readable.resume(); }, 1000); });The
readable.pause()method has no effect if there is a'readable'event listener.- event: 'close',listener: () => void): this;
Adds the
listenerfunction to the beginning of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
- event: 'close',listener: () => void): this;
Adds a one-time
listenerfunction for the event namedeventNameto the beginning of the listeners array. The next timeeventNameis triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName, including any wrappers (such as those created by.once()).import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log'); - read(size?: number): any;
The
readable.read()method reads data out of the internal buffer and returns it. If no data is available to be read,nullis returned. By default, the data is returned as aBufferobject unless an encoding has been specified using thereadable.setEncoding()method or the stream is operating in object mode.The optional
sizeargument specifies a specific number of bytes to read. Ifsizebytes are not available to be read,nullwill be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.If the
sizeargument is not specified, all of the data contained in the internal buffer will be returned.The
sizeargument must be less than or equal to 1 GiB.The
readable.read()method should only be called onReadablestreams operating in paused mode. In flowing mode,readable.read()is called automatically until the internal buffer is fully drained.const readable = getReadableStreamSomehow(); // 'readable' may be triggered multiple times as data is buffered in readable.on('readable', () => { let chunk; console.log('Stream is readable (new data received in buffer)'); // Use a loop to make sure we read all currently available data while (null !== (chunk = readable.read())) { console.log(`Read ${chunk.length} bytes of data...`); } }); // 'end' will be triggered once when there is no more data available readable.on('end', () => { console.log('Reached end of stream.'); });Each call to
readable.read()returns a chunk of data, ornull. The chunks are not concatenated. Awhileloop is necessary to consume all data currently in the buffer. When reading a large file.read()may returnnull, having consumed all buffered content so far, but more data that has not yet been buffered may still be on the way. In this case a new'readable'event will be emitted when there is more data in the buffer. Finally, the'end'event will be emitted when there is no more data to come.
readable, it is necessary to collect chunks across multiple'readable'events:const chunks = []; readable.on('readable', () => { let chunk; while (null !== (chunk = readable.read())) { chunks.push(chunk); } }); readable.on('end', () => { const content = chunks.join(''); });A
Readablestream in object mode will always return a single item from a call toreadable.read(size), regardless of the value of thesizeargument.If the
readable.read()method returns a chunk of data, a'data'event will also be emitted.Calling read after the
'end'event has been emitted will returnnull. No runtime error will be raised.@param sizeOptional argument to specify how much data to read.
- initial?: undefined,): Promise<T>;
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeErrorwith theERR_INVALID_ARGScode property.The reducer function iterates the stream element-by-element which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function and use the
readable.mapmethod.@param fna reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
initial: T,): Promise<T>;This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeErrorwith theERR_INVALID_ARGScode property.The reducer function iterates the stream element-by-element which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function and use the
readable.mapmethod.@param fna reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
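A minimal sketch of both reduce forms described above:

```ts
import { Readable } from "node:stream";

// With an explicit initial value the reduction also works on an
// empty stream; without one, an empty stream rejects with
// ERR_INVALID_ARGS.
const total = await Readable.from([1, 2, 3, 4]).reduce(
  (sum, n) => sum + n,
  0,
);
console.log(total); // 10
```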
- eventName?: string | symbol): this;
Removes all listeners, or those of the specified
eventName.It is bad practice to remove listeners added elsewhere in the code, particularly when the
EventEmitterinstance was created by some other component or module (e.g. sockets or file streams).Returns a reference to the
EventEmitter, so that calls can be chained. - event: 'close',listener: () => void): this;
Removes the specified
listenerfrom the listener array for the event namedeventName.const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback);removeListener()will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specifiedeventName, thenremoveListener()must be called multiple times to remove each instance.Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any
removeListener()orremoveAllListeners()calls after emitting and before the last listener finishes execution will not remove them fromemit()in progress. Subsequent events behave as expected.import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // ABecause listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the
emitter.listeners()method will need to be recreated.When a single function has been added as a handler multiple times for a single event (as in the example below),
removeListener()will remove the most recently added instance. In the example theonce('ping')listener is removed:import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping');Returns a reference to the
EventEmitter, so that calls can be chained. The
readable.resume()method causes an explicitly pausedReadablestream to resume emitting'data'events, switching the stream into flowing mode.The
readable.resume()method can be used to fully consume the data from a stream without actually processing any of that data:getReadableStreamSomehow() .resume() .on('end', () => { console.log('Reached the end, but did not read anything.'); });The
readable.resume()method has no effect if there is a'readable'event listener.- encoding: BufferEncoding): this;
The
writable.setDefaultEncoding()method sets the defaultencodingfor aWritablestream.@param encodingThe new default encoding
- encoding: BufferEncoding): this;
The
readable.setEncoding()method sets the character encoding for data read from theReadablestream.By default, no encoding is assigned and stream data will be returned as
Bufferobjects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than asBufferobjects. For instance, callingreadable.setEncoding('utf8')will cause the output data to be interpreted as UTF-8 data, and passed as strings. Callingreadable.setEncoding('hex')will cause the data to be encoded in hexadecimal string format.The
Readablestream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream asBufferobjects.const readable = getReadableStreamSomehow(); readable.setEncoding('utf8'); readable.on('data', (chunk) => { assert.equal(typeof chunk, 'string'); console.log('Got %d characters of string data:', chunk.length); });@param encodingThe encoding to use.
- n: number): this;
By default
EventEmitters will print a warning if more than10listeners are added for a particular event. This is a useful default that helps find memory leaks. Theemitter.setMaxListeners()method allows the limit to be modified for this specificEventEmitterinstance. The value can be set toInfinity(or0) to indicate an unlimited number of listeners.Returns a reference to the
EventEmitter, so that calls can be chained. - some(): Promise<boolean>;
This method is similar to
Array.prototype.someand calls fn on each chunk in the stream until the awaited return value istrue(or any truthy value). Once an fn call on a chunk's awaited return value is truthy, the stream is destroyed and the promise is fulfilled withtrue. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled withfalse.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
trueif fn returned a truthy value for at least one of the chunks. - @param limit
the number of chunks to take from the readable.
@returnsa stream with limit chunks taken.
- ): Promise<any[]>;
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.
@returnsa promise containing an array with the contents of the stream.
The
writable.uncork()method flushes all data buffered since cork was called.When using
writable.cork()andwritable.uncork()to manage the buffering of writes to a stream, defer calls towritable.uncork()usingprocess.nextTick(). Doing so allows batching of allwritable.write()calls that occur within a given Node.js event loop phase.stream.cork(); stream.write('some '); stream.write('data '); process.nextTick(() => stream.uncork());If the
writable.cork()method is called multiple times on a stream,writable.uncork()must be called the same number of times to flush the buffered data.stream.cork(); stream.write('some '); stream.cork(); stream.write('data '); process.nextTick(() => { stream.uncork(); // The data will not be flushed until uncork() is called a second time. stream.uncork(); });See also:
writable.cork().- destination?: WritableStream): this;
The
readable.unpipe()method detaches aWritablestream previously attached using the pipe method.If the
destinationis not specified, then all pipes are detached.If the
destinationis specified, but no pipe is set up for it, then the method does nothing.import fs from 'node:fs'; const readable = getReadableStreamSomehow(); const writable = fs.createWriteStream('file.txt'); // All the data from readable goes into 'file.txt', // but only for the first second. readable.pipe(writable); setTimeout(() => { console.log('Stop writing to file.txt.'); readable.unpipe(writable); console.log('Manually close the file stream.'); writable.end(); }, 1000);@param destinationOptional specific stream to unpipe
- chunk: any,encoding?: BufferEncoding): void;
Passing
chunkasnullsignals the end of the stream (EOF) and behaves the same asreadable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.The
readable.unshift()method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.The
stream.unshift(chunk)method cannot be called after the'end'event has been emitted or a runtime error will be thrown.Developers using
stream.unshift()often should consider switching to use of aTransformstream instead. See theAPI for stream implementerssection for more information.// Pull off a header delimited by \n\n. // Use unshift() if we get too much. // Call the callback with (error, header, stream). import { StringDecoder } from 'node:string_decoder'; function parseHeader(stream, callback) { stream.on('error', callback); stream.on('readable', onReadable); const decoder = new StringDecoder('utf8'); let header = ''; function onReadable() { let chunk; while (null !== (chunk = stream.read())) { const str = decoder.write(chunk); if (str.includes('\n\n')) { // Found the header boundary. const split = str.split(/\n\n/); header += split.shift(); const remaining = split.join('\n\n'); const buf = Buffer.from(remaining, 'utf8'); stream.removeListener('error', callback); // Remove the 'readable' listener before unshifting. stream.removeListener('readable', onReadable); if (buf.length) stream.unshift(buf); // Now the body of the message can be read from the stream. callback(null, header, stream); return; } // Still reading the header. header += str; } } }Unlike push,
stream.unshift(chunk)will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results ifreadable.unshift()is called during a read (i.e. from within a _read implementation on a custom stream). Following the call toreadable.unshift()with an immediate push will reset the reading state appropriately, however it is best to simply avoid callingreadable.unshift()while in the process of performing a read.@param chunkChunk of data to unshift onto the read queue. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray}, {DataView} ornull. For object mode streams,chunkmay be any JavaScript value.@param encodingEncoding of string chunks. Must be a valid
Bufferencoding, such as'utf8'or'ascii'. - wrap(stream: ReadableStream): this;
Prior to Node.js 0.10, streams did not implement the entire
node:streammodule API as it is currently defined. (SeeCompatibilityfor more information.)When using an older Node.js library that emits
'data'events and has a pause method that is advisory only, thereadable.wrap()method can be used to create aReadablestream that uses the old stream as its data source.It will rarely be necessary to use
readable.wrap()but the method has been provided as a convenience for interacting with older Node.js applications and libraries.import { OldReader } from './old-api-module.js'; import { Readable } from 'node:stream'; const oreader = new OldReader(); const myReader = new Readable().wrap(oreader); myReader.on('readable', () => { myReader.read(); // etc. });@param streamAn "old style" readable stream
- chunk: any,): boolean;
The
writable.write()method writes some data to the stream, and calls the suppliedcallbackonce the data has been fully handled. If an error occurs, thecallbackwill be called with the error as its first argument. Thecallbackis called asynchronously and before'error'is emitted.The return value is
trueif the internal buffer is less than thehighWaterMarkconfigured when the stream was created after admittingchunk. Iffalseis returned, further attempts to write data to the stream should stop until the'drain'event is emitted.While a stream is not draining, calls to
write()will bufferchunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'event will be emitted. Oncewrite()returns false, do not write more chunks until the'drain'event is emitted. While callingwrite()on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform, because theTransformstreams are paused by default until they are piped or a'data'or'readable'event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readableand use pipe. However, if callingwrite()is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });A
Writablestream in object mode will always ignore theencodingargument.@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.@param callbackCallback for when this chunk of data is flushed.
@returnsfalseif the stream wishes for the calling code to wait for the'drain'event to be emitted before continuing to write additional data; otherwisetrue.chunk: any,encoding: BufferEncoding,): boolean;The
writable.write()method writes some data to the stream, and calls the suppliedcallbackonce the data has been fully handled. If an error occurs, thecallbackwill be called with the error as its first argument. Thecallbackis called asynchronously and before'error'is emitted.The return value is
trueif the internal buffer is less than thehighWaterMarkconfigured when the stream was created after admittingchunk. Iffalseis returned, further attempts to write data to the stream should stop until the'drain'event is emitted.While a stream is not draining, calls to
write()will bufferchunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'event will be emitted. Oncewrite()returns false, do not write more chunks until the'drain'event is emitted. While callingwrite()on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform, because theTransformstreams are paused by default until they are piped or a'data'or'readable'event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readableand use pipe. However, if callingwrite()is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });A
Writablestream in object mode will always ignore theencodingargument.@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.@param encodingThe encoding, if
chunkis a string.@param callbackCallback for when this chunk of data is flushed.
@returnsfalseif the stream wishes for the calling code to wait for the'drain'event to be emitted before continuing to write additional data; otherwisetrue.
interface Gunzip
Transform streams are
Duplexstreams where the output is in some way related to the input. Like allDuplexstreams,Transformstreams implement both theReadableandWritableinterfaces.Examples of
Transformstreams include:zlib streamscrypto streams
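A hedged round-trip sketch showing Gunzip used as an ordinary Transform (the input string is arbitrary):

```ts
import { createGzip, createGunzip } from "node:zlib";
import { Readable } from "node:stream";

// Gunzip undoes Gzip, so chaining the two restores the original bytes.
const roundTrip = Readable.from(["hello"])
  .pipe(createGzip())
  .pipe(createGunzip());
console.log(Buffer.concat(await roundTrip.toArray()).toString()); // "hello"
```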
- allowHalfOpen: boolean
If
falsethen the stream will automatically end the writable side when the readable side ends. Set initially by theallowHalfOpenconstructor option, which defaults totrue.This can be changed manually to change the half-open behavior of an existing
Duplexstream instance, but must be changed before the'end'event is emitted. - readable: boolean
Is
trueif it is safe to call read, which means the stream has not been destroyed or emitted'error'or'end'. - readonly readableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'end'. - readonly readableEncoding: null | BufferEncoding
Getter for the property
encodingof a givenReadablestream. Theencodingproperty can be set using the setEncoding method. - readonly readableFlowing: null | boolean
This property reflects the current state of a
Readablestream as described in the Three states section. - readonly readableHighWaterMark: number
Returns the value of
highWaterMarkpassed when creating thisReadable. - readonly readableLength: number
This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the
highWaterMark. - readonly writable: boolean
Is
trueif it is safe to callwritable.write(), which means the stream has not been destroyed, errored, or ended. - readonly writableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'finish'. - readonly writableCorked: number
Number of times
writable.uncork()needs to be called in order to fully uncork the stream. - readonly writableEnded: boolean
Is
trueafterwritable.end()has been called. This property does not indicate whether the data has been flushed, for this usewritable.writableFinishedinstead. - readonly writableHighWaterMark: number
Returns the value of
highWaterMarkpassed when creating thisWritable. - readonly writableLength: number
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the
highWaterMark. - readonly writableNeedDrain: boolean
Is
trueif the stream's buffer has been full and the stream will emit'drain'. Calls
readable.destroy()with anAbortErrorand returns a promise that fulfills when the stream is finished.- @returns
AsyncIteratorto fully consume the stream. - event: 'close',listener: () => void): this;
Event emitter. The defined events on documents include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'data',listener: (chunk: any) => void): this;Event emitter. The defined events on documents include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'drain',listener: () => void): this;Event emitter. The defined events on documents include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'end',listener: () => void): this;Event emitter. The defined events on documents include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'error',): this;Event emitter. The defined events on documents include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'finish',listener: () => void): this;Event emitter. The defined events on documents include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'pause',listener: () => void): this;Event emitter. The defined events on documents include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'pipe',): this;Event emitter. The defined events on documents include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'readable',listener: () => void): this;Event emitter. The defined events on documents include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'resume',listener: () => void): this;Event emitter. The defined events on documents include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'unpipe',): this;Event emitter. The defined events on documents include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: string | symbol,listener: (...args: any[]) => void): this;Event emitter. The defined events on documents include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
This method returns a new stream with chunks of the underlying stream paired with a counter in the form
[index, chunk]. The first index value is0and it increases by 1 for each chunk produced.@returnsa stream of indexed pairs.
- stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
The
writable.cork()method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.The primary intent of
writable.cork()is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination,writable.cork()buffers all the chunks untilwritable.uncork()is called, which will pass them all towritable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use ofwritable.cork()without implementingwritable._writev()may have an adverse effect on throughput.See also:
writable.uncork(),writable._writev().- ): this;
Destroy the stream. Optionally emit an
'error'event, and emit a'close'event (unlessemitCloseis set tofalse). After this call, the readable stream will release any internal resources and subsequent calls topush()will be ignored.Once
destroy()has been called any further calls will be a no-op and no further errors except from_destroy()may be emitted as'error'.Implementors should not override this method, but instead implement
readable._destroy().@param errorError which will be passed as payload in
'error'event - drop(limit: number,
This method returns a new stream with the first limit chunks dropped from the start.
@param limitthe number of chunks to drop from the readable.
@returnsa stream with limit chunks dropped from the start.
- emit(event: 'close'): boolean;
Synchronously calls each of the listeners registered for the event named
eventName, in the order they were registered, passing the supplied arguments to each.Returns
trueif the event had listeners,falseotherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener - end(cb?: () => void): this;
Calling the
writable.end()method signals that no more data will be written to theWritable. The optionalchunkandencodingarguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!end(chunk: any,cb?: () => void): this;Calling the
writable.end()method signals that no more data will be written to theWritable. The optionalchunkandencodingarguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.end(chunk: any,encoding: BufferEncoding,cb?: () => void): this;Calling the
writable.end()method signals that no more data will be written to theWritable. The optionalchunkandencodingarguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.@param encodingThe encoding if
chunkis a string Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbols.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]- ): Promise<boolean>;
This method is similar to
Array.prototype.everyand calls fn on each chunk in the stream to check whether all awaited return values are truthy for fn. Once an fn call's awaited return value for a chunk is falsy, the stream is destroyed and the promise is fulfilled withfalse. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled withtrue.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
trueif fn returned a truthy value for every one of the chunks. This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be
awaited.@param fna function to filter chunks from the stream. Async or not.
@returnsa stream filtered with the predicate fn.
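For example, a sketch assuming this is the experimental filter() helper on Readable (the method name is not shown on this page):
import { Readable } from 'node:stream';
// Keep only even numbers; fn may also be async, e.g. a per-chunk lookup.
const evens = await Readable.from([1, 2, 3, 4]).filter((n) => n % 2 === 0).toArray();
console.log(evens); // Prints: [ 2, 4 ]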
- ): Promise<undefined | T>;
This method is similar to
Array.prototype.findand calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled withundefined.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefinedif no element was found.find(): Promise<any>;This method is similar to
Array.prototype.findand calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled withundefined.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefinedif no element was found. This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.
It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.
@param fna function to map over every chunk in the stream. May be async. May be a stream or generator.
@returnsa stream flat-mapped with the function fn.
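A small sketch, assuming this is the experimental flatMap() helper on Readable. Each chunk maps to an iterable, and the results are flattened into a single stream:
import { Readable } from 'node:stream';
const words = await Readable.from(['hello world', 'foo bar'])
  .flatMap((line) => line.split(' '))
  .toArray();
console.log(words); // Prints: [ 'hello', 'world', 'foo', 'bar' ]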
- ): Promise<void>;
This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be
awaited.This method is different from
for await...ofloops in that it can optionally process chunks concurrently. In addition, aforEachiteration can only be stopped by having passed asignaloption and aborting the related AbortController whilefor await...ofcan be stopped withbreakorreturn. In either case the stream will be destroyed.This method is different from listening to the
'data'event in that it uses thereadableevent in the underlying machinery and can limit the number of concurrent fn calls.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise for when the stream has finished.
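A short sketch of forEach with bounded concurrency (the concurrency option is part of the experimental helper API):
import { Readable } from 'node:stream';
// Process up to two chunks at a time; an AbortSignal could be passed as
// { signal } to stop the iteration early.
await Readable.from([1, 2, 3, 4]).forEach(
  async (n) => { console.log(n * 2); },
  { concurrency: 2 },
);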
Returns the current max listener value for the
EventEmitterwhich is either set byemitter.setMaxListeners(n)or defaults to EventEmitter.defaultMaxListeners.The
readable.isPaused()method returns the current operating state of theReadable. This is used primarily by the mechanism that underlies thereadable.pipe()method. In most typical cases, there will be no reason to use this method directly.const readable = new stream.Readable(); readable.isPaused(); // === false readable.pause(); readable.isPaused(); // === true readable.resume(); readable.isPaused(); // === false- options?: { destroyOnReturn: boolean }): AsyncIterator<any>;
The iterator created by this method gives users the option to cancel the destruction of the stream if the
for await...ofloop is exited byreturn,break, orthrow, or if the iterator should destroy the stream if the stream emitted an error during iteration. - eventName: string | symbol,listener?: Function): number;
Returns the number of listeners listening for the event named
eventName. Iflisteneris provided, it will return how many times the listener is found in the list of the listeners of the event.@param eventNameThe name of the event being listened for
@param listenerThe event handler function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName.server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ] - map(
This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be
awaited before being passed to the result stream.@param fna function to map over every chunk in the stream. Async or not.
@returnsa stream mapped with the function fn.
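For example, a minimal sketch of map() with an async mapper (experimental helper API):
import { Readable } from 'node:stream';
// Up to two fn calls may be in flight at once.
const doubled = await Readable.from([1, 2, 3])
  .map(async (n) => n * 2, { concurrency: 2 })
  .toArray();
console.log(doubled); // Prints: [ 2, 4, 6 ]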
- eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for
emitter.removeListener(). - on(event: 'close',listener: () => void): this;
Adds the
listenerfunction to the end of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
- once(event: 'close',listener: () => void): this;
Adds a one-time
listenerfunction for the event namedeventName. The next timeeventNameis triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
The
readable.pause()method will cause a stream in flowing mode to stop emitting'data'events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.const readable = getReadableStreamSomehow(); readable.on('data', (chunk) => { console.log(`Received ${chunk.length} bytes of data.`); readable.pause(); console.log('There will be no additional data for 1 second.'); setTimeout(() => { console.log('Now data will start flowing again.'); readable.resume(); }, 1000); });The
readable.pause()method has no effect if there is a'readable'event listener.- event: 'close',listener: () => void): this;
Adds the
listenerfunction to the beginning of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
- event: 'close',listener: () => void): this;
Adds a one-time
listenerfunction for the event namedeventNameto the beginning of the listeners array. The next timeeventNameis triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName, including any wrappers (such as those created by.once()).import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log'); - read(size?: number): any;
The
readable.read()method reads data out of the internal buffer and returns it. If no data is available to be read,nullis returned. By default, the data is returned as aBufferobject unless an encoding has been specified using thereadable.setEncoding()method or the stream is operating in object mode.The optional
sizeargument specifies a specific number of bytes to read. Ifsizebytes are not available to be read,nullwill be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.If the
sizeargument is not specified, all of the data contained in the internal buffer will be returned.The
sizeargument must be less than or equal to 1 GiB.The
readable.read()method should only be called onReadablestreams operating in paused mode. In flowing mode,readable.read()is called automatically until the internal buffer is fully drained.const readable = getReadableStreamSomehow(); // 'readable' may be triggered multiple times as data is buffered in readable.on('readable', () => { let chunk; console.log('Stream is readable (new data received in buffer)'); // Use a loop to make sure we read all currently available data while (null !== (chunk = readable.read())) { console.log(`Read ${chunk.length} bytes of data...`); } }); // 'end' will be triggered once when there is no more data available readable.on('end', () => { console.log('Reached end of stream.'); });Each call to
readable.read()returns a chunk of data, ornull. The chunks are not concatenated. Awhileloop is necessary to consume all data currently in the buffer. When reading a large file.read()may returnnull, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new'readable'event will be emitted when there is more data in the buffer. Finally the'end'event will be emitted when there is no more data to come.Therefore to read a file's whole contents from a
readable, it is necessary to collect chunks across multiple'readable'events:const chunks = []; readable.on('readable', () => { let chunk; while (null !== (chunk = readable.read())) { chunks.push(chunk); } }); readable.on('end', () => { const content = chunks.join(''); });A
Readablestream in object mode will always return a single item from a call toreadable.read(size), regardless of the value of thesizeargument.If the
readable.read()method returns a chunk of data, a'data'event will also be emitted.Calling read after the
'end'event has been emitted will returnnull. No runtime error will be raised.@param sizeOptional argument to specify how much data to read.
- initial?: undefined,): Promise<T>;
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeErrorwith theERR_INVALID_ARGScode property.The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function and pass it to the
readable.mapmethod.@param fna reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
initial: T,): Promise<T>;This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeErrorwith theERR_INVALID_ARGScode property.The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function and pass it to the
readable.mapmethod.@param fna reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
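A minimal sketch of reduce() with an explicit initial value (assuming this is the experimental reduce() helper on Readable):
import { Readable } from 'node:stream';
// Sum the chunks, starting from 0; with no initial value the first chunk
// would be used instead.
const total = await Readable.from([1, 2, 3, 4]).reduce((sum, n) => sum + n, 0);
console.log(total); // Prints: 10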
- eventName?: string | symbol): this;
Removes all listeners, or those of the specified
eventName.It is bad practice to remove listeners added elsewhere in the code, particularly when the
EventEmitterinstance was created by some other component or module (e.g. sockets or file streams).Returns a reference to the
EventEmitter, so that calls can be chained. - event: 'close',listener: () => void): this;
Removes the specified
listenerfrom the listener array for the event namedeventName.const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback);removeListener()will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specifiedeventName, thenremoveListener()must be called multiple times to remove each instance.Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any
removeListener()orremoveAllListeners()calls after emitting and before the last listener finishes execution will not remove them fromemit()in progress. Subsequent events behave as expected.import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // ABecause listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the
emitter.listeners()method will need to be recreated.When a single function has been added as a handler multiple times for a single event (as in the example below),
removeListener()will remove the most recently added instance. In the example theonce('ping')listener is removed:import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping');Returns a reference to the
EventEmitter, so that calls can be chained. The
readable.resume()method causes an explicitly pausedReadablestream to resume emitting'data'events, switching the stream into flowing mode.The
readable.resume()method can be used to fully consume the data from a stream without actually processing any of that data:getReadableStreamSomehow() .resume() .on('end', () => { console.log('Reached the end, but did not read anything.'); });The
readable.resume()method has no effect if there is a'readable'event listener.- encoding: BufferEncoding): this;
The
writable.setDefaultEncoding()method sets the defaultencodingfor aWritablestream.@param encodingThe new default encoding
- encoding: BufferEncoding): this;
The
readable.setEncoding()method sets the character encoding for data read from theReadablestream.By default, no encoding is assigned and stream data will be returned as
Bufferobjects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than asBufferobjects. For instance, callingreadable.setEncoding('utf8')will cause the output data to be interpreted as UTF-8 data, and passed as strings. Callingreadable.setEncoding('hex')will cause the data to be encoded in hexadecimal string format.The
Readablestream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream asBufferobjects.const readable = getReadableStreamSomehow(); readable.setEncoding('utf8'); readable.on('data', (chunk) => { assert.equal(typeof chunk, 'string'); console.log('Got %d characters of string data:', chunk.length); });@param encodingThe encoding to use.
- n: number): this;
By default
EventEmitters will print a warning if more than10listeners are added for a particular event. This is a useful default that helps finding memory leaks. Theemitter.setMaxListeners()method allows the limit to be modified for this specificEventEmitterinstance. The value can be set toInfinity(or0) to indicate an unlimited number of listeners.Returns a reference to the
EventEmitter, so that calls can be chained. - some(): Promise<boolean>;
This method is similar to
Array.prototype.someand calls fn on each chunk in the stream until the awaited return value istrue(or any truthy value). Once an fn call's awaited return value for a chunk is truthy, the stream is destroyed and the promise is fulfilled withtrue. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled withfalse.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
trueif fn returned a truthy value for at least one of the chunks. - @param limit
the number of chunks to take from the readable.
@returnsa stream with limit chunks taken.
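A minimal sketch, assuming this is the experimental take() helper on Readable:
import { Readable } from 'node:stream';
const firstTwo = await Readable.from([1, 2, 3, 4]).take(2).toArray();
console.log(firstTwo); // Prints: [ 1, 2 ]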
- ): Promise<any[]>;
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.
@returnsa promise containing an array with the contents of the stream.
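For example, a sketch assuming this is the experimental toArray() helper on Readable (suitable only for inputs that fit in memory):
import { Readable } from 'node:stream';
const chunks = await Readable.from(['a', 'b', 'c']).toArray();
console.log(chunks); // Prints: [ 'a', 'b', 'c' ]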
The
writable.uncork()method flushes all data buffered since cork was called.When using
writable.cork()andwritable.uncork()to manage the buffering of writes to a stream, defer calls towritable.uncork()usingprocess.nextTick(). Doing so allows batching of allwritable.write()calls that occur within a given Node.js event loop phase.stream.cork(); stream.write('some '); stream.write('data '); process.nextTick(() => stream.uncork());If the
writable.cork()method is called multiple times on a stream, the same number of calls towritable.uncork()must be called to flush the buffered data.stream.cork(); stream.write('some '); stream.cork(); stream.write('data '); process.nextTick(() => { stream.uncork(); // The data will not be flushed until uncork() is called a second time. stream.uncork(); });See also:
writable.cork().- destination?: WritableStream): this;
The
readable.unpipe()method detaches aWritablestream previously attached using the pipe method.If the
destinationis not specified, then all pipes are detached.If the
destinationis specified, but no pipe is set up for it, then the method does nothing.import fs from 'node:fs'; const readable = getReadableStreamSomehow(); const writable = fs.createWriteStream('file.txt'); // All the data from readable goes into 'file.txt', // but only for the first second. readable.pipe(writable); setTimeout(() => { console.log('Stop writing to file.txt.'); readable.unpipe(writable); console.log('Manually close the file stream.'); writable.end(); }, 1000);@param destinationOptional specific stream to unpipe
- chunk: any,encoding?: BufferEncoding): void;
Passing
chunkasnullsignals the end of the stream (EOF) and behaves the same asreadable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.The
readable.unshift()method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.The
stream.unshift(chunk)method cannot be called after the'end'event has been emitted or a runtime error will be thrown.Developers using
stream.unshift()often should consider switching to use of aTransformstream instead. See theAPI for stream implementerssection for more information.// Pull off a header delimited by \n\n. // Use unshift() if we get too much. // Call the callback with (error, header, stream). import { StringDecoder } from 'node:string_decoder'; function parseHeader(stream, callback) { stream.on('error', callback); stream.on('readable', onReadable); const decoder = new StringDecoder('utf8'); let header = ''; function onReadable() { let chunk; while (null !== (chunk = stream.read())) { const str = decoder.write(chunk); if (str.includes('\n\n')) { // Found the header boundary. const split = str.split(/\n\n/); header += split.shift(); const remaining = split.join('\n\n'); const buf = Buffer.from(remaining, 'utf8'); stream.removeListener('error', callback); // Remove the 'readable' listener before unshifting. stream.removeListener('readable', onReadable); if (buf.length) stream.unshift(buf); // Now the body of the message can be read from the stream. callback(null, header, stream); return; } // Still reading the header. header += str; } } }Unlike push,
stream.unshift(chunk)will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results ifreadable.unshift()is called during a read (i.e. from within a _read implementation on a custom stream). Following the call toreadable.unshift()with an immediate push will reset the reading state appropriately, however it is best to simply avoid callingreadable.unshift()while in the process of performing a read.@param chunkChunk of data to unshift onto the read queue. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray}, {DataView} ornull. For object mode streams,chunkmay be any JavaScript value.@param encodingEncoding of string chunks. Must be a valid
Bufferencoding, such as'utf8'or'ascii'. - wrap(stream: ReadableStream): this;
Prior to Node.js 0.10, streams did not implement the entire
node:streammodule API as it is currently defined. (SeeCompatibilityfor more information.)When using an older Node.js library that emits
'data'events and has a pause method that is advisory only, thereadable.wrap()method can be used to create aReadablestream that uses the old stream as its data source.It will rarely be necessary to use
readable.wrap()but the method has been provided as a convenience for interacting with older Node.js applications and libraries.import { OldReader } from './old-api-module.js'; import { Readable } from 'node:stream'; const oreader = new OldReader(); const myReader = new Readable().wrap(oreader); myReader.on('readable', () => { myReader.read(); // etc. });@param streamAn "old style" readable stream
- chunk: any,): boolean;
The
writable.write()method writes some data to the stream, and calls the suppliedcallbackonce the data has been fully handled. If an error occurs, thecallbackwill be called with the error as its first argument. Thecallbackis called asynchronously and before'error'is emitted.The return value is
trueif the internal buffer is less than thehighWaterMarkconfigured when the stream was created after admittingchunk. Iffalseis returned, further attempts to write data to the stream should stop until the'drain'event is emitted.While a stream is not draining, calls to
write()will bufferchunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'event will be emitted. Oncewrite()returns false, do not write more chunks until the'drain'event is emitted. While callingwrite()on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform, because theTransformstreams are paused by default until they are piped or a'data'or'readable'event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readableand use pipe. However, if callingwrite()is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });A
Writablestream in object mode will always ignore theencodingargument.@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.@param callbackCallback for when this chunk of data is flushed.
@returnsfalseif the stream wishes for the calling code to wait for the'drain'event to be emitted before continuing to write additional data; otherwisetrue.chunk: any,encoding: BufferEncoding,): boolean;The
writable.write()method writes some data to the stream, and calls the suppliedcallbackonce the data has been fully handled. If an error occurs, thecallbackwill be called with the error as its first argument. Thecallbackis called asynchronously and before'error'is emitted.The return value is
trueif the internal buffer is less than thehighWaterMarkconfigured when the stream was created after admittingchunk. Iffalseis returned, further attempts to write data to the stream should stop until the'drain'event is emitted.While a stream is not draining, calls to
write()will bufferchunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'event will be emitted. Oncewrite()returns false, do not write more chunks until the'drain'event is emitted. While callingwrite()on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform, because theTransformstreams are paused by default until they are piped or a'data'or'readable'event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readableand use pipe. However, if callingwrite()is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });A
Writablestream in object mode will always ignore theencodingargument.@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.@param encodingThe encoding, if
chunkis a string.@param callbackCallback for when this chunk of data is flushed.
@returnsfalseif the stream wishes for the calling code to wait for the'drain'event to be emitted before continuing to write additional data; otherwisetrue.
interface Gzip
Transform streams are
Duplexstreams where the output is in some way related to the input. Like allDuplexstreams,Transformstreams implement both theReadableandWritableinterfaces.Examples of
Transformstreams include:zlib streamscrypto streams
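For example, a typical Gzip transform in a pipeline (a minimal sketch; the file names are hypothetical):
import { createGzip } from 'node:zlib';
import { createReadStream, createWriteStream } from 'node:fs';
import { pipeline } from 'node:stream/promises';
// pipeline() wires the streams together and forwards errors from any stage.
await pipeline(
  createReadStream('input.txt'),
  createGzip(),
  createWriteStream('input.txt.gz'),
);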
- allowHalfOpen: boolean
If
falsethen the stream will automatically end the writable side when the readable side ends. Set initially by theallowHalfOpenconstructor option, which defaults totrue.This can be changed manually to change the half-open behavior of an existing
Duplexstream instance, but must be changed before the'end'event is emitted. - readable: boolean
Is
trueif it is safe to call read, which means the stream has not been destroyed or emitted'error'or'end'. - readonly readableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'end'. - readonly readableEncoding: null | BufferEncoding
Getter for the property
encodingof a givenReadablestream. Theencodingproperty can be set using the setEncoding method. - readonly readableFlowing: null | boolean
This property reflects the current state of a
Readablestream as described in the Three states section. - readonly readableHighWaterMark: number
Returns the value of
highWaterMarkpassed when creating thisReadable. - readonly readableLength: number
This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the
highWaterMark. - readonly writable: boolean
Is
trueif it is safe to callwritable.write(), which means the stream has not been destroyed, errored, or ended. - readonly writableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'finish'. - readonly writableCorked: number
Number of times
writable.uncork()needs to be called in order to fully uncork the stream. - readonly writableEnded: boolean
Is
trueafterwritable.end()has been called. This property does not indicate whether the data has been flushed, for this usewritable.writableFinishedinstead. - readonly writableHighWaterMark: number
Return the value of
highWaterMarkpassed when creating thisWritable. - readonly writableLength: number
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the
highWaterMark. - readonly writableNeedDrain: boolean
Is
trueif the stream's buffer has been full and stream will emit'drain'. Calls
readable.destroy()with anAbortErrorand returns a promise that fulfills when the stream is finished.- @returns
AsyncIteratorto fully consume the stream. - event: 'close',listener: () => void): this;
Event emitter. The defined events include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'data', listener: (chunk: any) => void): this;
event: 'drain', listener: () => void): this;
event: 'end', listener: () => void): this;
event: 'error', listener: (err: Error) => void): this;
event: 'finish', listener: () => void): this;
event: 'pause', listener: () => void): this;
event: 'pipe', listener: (src: Readable) => void): this;
event: 'readable', listener: () => void): this;
event: 'resume', listener: () => void): this;
event: 'unpipe', listener: (src: Readable) => void): this;
event: string | symbol, listener: (...args: any[]) => void): this;
Each overload registers a listener for the corresponding event; the event list above applies to all of these overloads.
This method returns a new stream with chunks of the underlying stream paired with a counter in the form
[index, chunk]. The first index value is0and it increases by 1 for each chunk produced.@returnsa stream of indexed pairs.
- stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
The
writable.cork()method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.The primary intent of
writable.cork()is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination,writable.cork()buffers all the chunks untilwritable.uncork()is called, which will pass them all towritable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use ofwritable.cork()without implementingwritable._writev()may have an adverse effect on throughput.See also:
writable.uncork(),writable._writev().- ): this;
Destroy the stream. Optionally emit an
'error'event, and emit a'close'event (unlessemitCloseis set tofalse). After this call, the readable stream will release any internal resources and subsequent calls topush()will be ignored.Once
destroy()has been called any further calls will be a no-op and no further errors except from_destroy()may be emitted as'error'.Implementors should not override this method, but instead implement
readable._destroy().@param errorError which will be passed as payload in
'error'event - drop(limit: number,
This method returns a new stream with the first limit chunks dropped from the start.
@param limitthe number of chunks to drop from the readable.
@returnsa stream with limit chunks dropped from the start.
- emit(event: 'close'): boolean;
Synchronously calls each of the listeners registered for the event named
eventName, in the order they were registered, passing the supplied arguments to each.Returns
trueif the event had listeners,falseotherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener - end(cb?: () => void): this;
Calling the
writable.end()method signals that no more data will be written to theWritable. The optionalchunkandencodingarguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!end(chunk: any,cb?: () => void): this;Calling the
writable.end()method signals that no more data will be written to theWritable. The optionalchunkandencodingarguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.end(chunk: any,encoding: BufferEncoding,cb?: () => void): this;Calling the
writable.end()method signals that no more data will be written to theWritable. The optionalchunkandencodingarguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.@param encodingThe encoding if
chunkis a string Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbols.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]- ): Promise<boolean>;
This method is similar to
Array.prototype.everyand calls fn on each chunk in the stream to check whether all awaited return values are truthy for fn. Once an fn call's awaited return value for a chunk is falsy, the stream is destroyed and the promise is fulfilled withfalse. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled withtrue.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
trueif fn returned a truthy value for every one of the chunks. This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be
awaited.@param fna function to filter chunks from the stream. Async or not.
@returnsa stream filtered with the predicate fn.
- ): Promise<undefined | T>;
This method is similar to
Array.prototype.findand calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled withundefined.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefinedif no element was found.find(): Promise<any>;This method is similar to
Array.prototype.findand calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled withundefined.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefinedif no element was found. This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.
It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.
@param fna function to map over every chunk in the stream. May be async. May be a stream or generator.
@returnsa stream flat-mapped with the function fn.
- ): Promise<void>;
This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be
awaited.This method is different from
for await...ofloops in that it can optionally process chunks concurrently. In addition, aforEachiteration can only be stopped by having passed asignaloption and aborting the related AbortController whilefor await...ofcan be stopped withbreakorreturn. In either case the stream will be destroyed.This method is different from listening to the
'data'event in that it uses thereadableevent in the underlying machinery and can limit the number of concurrent fn calls.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise for when the stream has finished.
Returns the current max listener value for the
EventEmitterwhich is either set byemitter.setMaxListeners(n)or defaults to EventEmitter.defaultMaxListeners.The
readable.isPaused()method returns the current operating state of theReadable. This is used primarily by the mechanism that underlies thereadable.pipe()method. In most typical cases, there will be no reason to use this method directly.const readable = new stream.Readable(); readable.isPaused(); // === false readable.pause(); readable.isPaused(); // === true readable.resume(); readable.isPaused(); // === false- options?: { destroyOnReturn: boolean }): AsyncIterator<any>;
The iterator created by this method gives users the option to cancel the destruction of the stream if the
for await...ofloop is exited byreturn,break, orthrow, or if the iterator should destroy the stream if the stream emitted an error during iteration. - eventName: string | symbol,listener?: Function): number;
Returns the number of listeners listening for the event named
eventName. Iflisteneris provided, it will return how many times the listener is found in the list of the listeners of the event.@param eventNameThe name of the event being listened for
@param listenerThe event handler function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName.server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ] - map(
This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be
awaited before being passed to the result stream.@param fna function to map over every chunk in the stream. Async or not.
@returnsa stream mapped with the function fn.
- eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for
emitter.removeListener(). - on(event: 'close',listener: () => void): this;
Adds the
listenerfunction to the end of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
- once(event: 'close',listener: () => void): this;
Adds a one-time
listenerfunction for the event namedeventName. The next timeeventNameis triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
The
readable.pause()method will cause a stream in flowing mode to stop emitting'data'events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.const readable = getReadableStreamSomehow(); readable.on('data', (chunk) => { console.log(`Received ${chunk.length} bytes of data.`); readable.pause(); console.log('There will be no additional data for 1 second.'); setTimeout(() => { console.log('Now data will start flowing again.'); readable.resume(); }, 1000); });The
readable.pause()method has no effect if there is a'readable'event listener.- event: 'close',listener: () => void): this;
Adds the
listenerfunction to the beginning of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
- event: 'close',listener: () => void): this;
Adds a one-time
listenerfunction for the event namedeventNameto the beginning of the listeners array. The next timeeventNameis triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName, including any wrappers (such as those created by.once()).import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log'); - read(size?: number): any;
The
readable.read()method reads data out of the internal buffer and returns it. If no data is available to be read,nullis returned. By default, the data is returned as aBufferobject unless an encoding has been specified using thereadable.setEncoding()method or the stream is operating in object mode.The optional
sizeargument specifies a specific number of bytes to read. Ifsizebytes are not available to be read,nullwill be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.If the
sizeargument is not specified, all of the data contained in the internal buffer will be returned.The
sizeargument must be less than or equal to 1 GiB.The
readable.read()method should only be called onReadablestreams operating in paused mode. In flowing mode,readable.read()is called automatically until the internal buffer is fully drained.const readable = getReadableStreamSomehow(); // 'readable' may be triggered multiple times as data is buffered in readable.on('readable', () => { let chunk; console.log('Stream is readable (new data received in buffer)'); // Use a loop to make sure we read all currently available data while (null !== (chunk = readable.read())) { console.log(`Read ${chunk.length} bytes of data...`); } }); // 'end' will be triggered once when there is no more data available readable.on('end', () => { console.log('Reached end of stream.'); });Each call to
readable.read()returns a chunk of data, ornull. The chunks are not concatenated. Awhileloop is necessary to consume all data currently in the buffer. When reading a large file.read()may returnnull, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new'readable'event will be emitted when there is more data in the buffer. Finally the'end'event will be emitted when there is no more data to come.Therefore to read a file's whole contents from a
readable, it is necessary to collect chunks across multiple'readable'events:const chunks = []; readable.on('readable', () => { let chunk; while (null !== (chunk = readable.read())) { chunks.push(chunk); } }); readable.on('end', () => { const content = chunks.join(''); });A
Readablestream in object mode will always return a single item from a call toreadable.read(size), regardless of the value of thesizeargument.If the
readable.read()method returns a chunk of data, a'data'event will also be emitted.Calling read after the
'end'event has been emitted will returnnull. No runtime error will be raised.@param sizeOptional argument to specify how much data to read.
- initial?: undefined,): Promise<T>;
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeErrorwith theERR_INVALID_ARGScode property.The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function and pass it to the
readable.mapmethod.@param fna reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
initial: T,): Promise<T>;This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeErrorwith theERR_INVALID_ARGScode property.The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function and pass it to the
readable.mapmethod.@param fna reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
- eventName?: string | symbol): this;
Removes all listeners, or those of the specified
eventName.It is bad practice to remove listeners added elsewhere in the code, particularly when the
EventEmitterinstance was created by some other component or module (e.g. sockets or file streams).Returns a reference to the
EventEmitter, so that calls can be chained. - event: 'close',listener: () => void): this;
Removes the specified
listenerfrom the listener array for the event namedeventName.const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback);removeListener()will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specifiedeventName, thenremoveListener()must be called multiple times to remove each instance.Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any
removeListener()orremoveAllListeners()calls after emitting and before the last listener finishes execution will not remove them fromemit()in progress. Subsequent events behave as expected.import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // ABecause listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the
emitter.listeners()method will need to be recreated.When a single function has been added as a handler multiple times for a single event (as in the example below),
removeListener()will remove the most recently added instance. In the example theonce('ping')listener is removed:import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping');Returns a reference to the
EventEmitter, so that calls can be chained. The
readable.resume()method causes an explicitly pausedReadablestream to resume emitting'data'events, switching the stream into flowing mode.The
readable.resume()method can be used to fully consume the data from a stream without actually processing any of that data:getReadableStreamSomehow() .resume() .on('end', () => { console.log('Reached the end, but did not read anything.'); });The
readable.resume()method has no effect if there is a'readable'event listener.- encoding: BufferEncoding): this;
The
`writable.setDefaultEncoding()` method sets the default `encoding` for a `Writable` stream.

@param encoding The new default encoding
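A small sketch, assuming a `Writable` such as one returned by `fs.createWriteStream` (the path here is hypothetical):

```js
import fs from 'node:fs';

const out = fs.createWriteStream('out.txt');
out.setDefaultEncoding('utf8');
// Strings written without an explicit encoding are now encoded as utf8.
out.write('héllo');
out.end();
```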
- encoding: BufferEncoding): this;
The
readable.setEncoding()method sets the character encoding for data read from theReadablestream.By default, no encoding is assigned and stream data will be returned as
Bufferobjects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than asBufferobjects. For instance, callingreadable.setEncoding('utf8')will cause the output data to be interpreted as UTF-8 data, and passed as strings. Callingreadable.setEncoding('hex')will cause the data to be encoded in hexadecimal string format.The
Readablestream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream asBufferobjects.const readable = getReadableStreamSomehow(); readable.setEncoding('utf8'); readable.on('data', (chunk) => { assert.equal(typeof chunk, 'string'); console.log('Got %d characters of string data:', chunk.length); });@param encodingThe encoding to use.
- n: number): this;
By default
EventEmitters will print a warning if more than10listeners are added for a particular event. This is a useful default that helps finding memory leaks. Theemitter.setMaxListeners()method allows the limit to be modified for this specificEventEmitterinstance. The value can be set toInfinity(or0) to indicate an unlimited number of listeners.Returns a reference to the
EventEmitter, so that calls can be chained. - some(): Promise<boolean>;
This method is similar to
`Array.prototype.some` and calls fn on each chunk in the stream until the awaited return value is `true` (or any truthy value). Once an fn call on a chunk's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with `true`. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with `false`.

@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to `true` if fn returned a truthy value for at least one of the chunks.
- @param limit the number of chunks to take from the readable.
@returns a stream with limit chunks taken.
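Minimal sketches of the some and take helpers, assuming small streams built with `Readable.from` and hypothetical data:

```js
import { Readable } from 'node:stream';

// some(): resolves to true as soon as one chunk satisfies the predicate;
// the stream is destroyed at that point.
const anyLong = await Readable.from(['hi', 'hello'])
  .some((word) => word.length > 3);
console.log(anyLong); // true

// take(): keep only the first `limit` chunks.
const firstTwo = await Readable.from([1, 2, 3, 4]).take(2).toArray();
console.log(firstTwo); // [ 1, 2 ]
```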
- ): Promise<any[]>;
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.
@returnsa promise containing an array with the contents of the stream.
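A short sketch of toArray with hypothetical data; note that it buffers the entire stream in memory:

```js
import { Readable } from 'node:stream';

// Drain the whole stream into an array; only sensible for small streams.
const contents = await Readable.from(['a', 'b', 'c']).toArray();
console.log(contents); // [ 'a', 'b', 'c' ]
```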
The
writable.uncork()method flushes all data buffered since cork was called.When using
writable.cork()andwritable.uncork()to manage the buffering of writes to a stream, defer calls towritable.uncork()usingprocess.nextTick(). Doing so allows batching of allwritable.write()calls that occur within a given Node.js event loop phase.stream.cork(); stream.write('some '); stream.write('data '); process.nextTick(() => stream.uncork());If the
writable.cork()method is called multiple times on a stream, the same number of calls towritable.uncork()must be called to flush the buffered data.stream.cork(); stream.write('some '); stream.cork(); stream.write('data '); process.nextTick(() => { stream.uncork(); // The data will not be flushed until uncork() is called a second time. stream.uncork(); });See also:
writable.cork().- destination?: WritableStream): this;
The
readable.unpipe()method detaches aWritablestream previously attached using the pipe method.If the
destinationis not specified, then all pipes are detached.If the
destinationis specified, but no pipe is set up for it, then the method does nothing.import fs from 'node:fs'; const readable = getReadableStreamSomehow(); const writable = fs.createWriteStream('file.txt'); // All the data from readable goes into 'file.txt', // but only for the first second. readable.pipe(writable); setTimeout(() => { console.log('Stop writing to file.txt.'); readable.unpipe(writable); console.log('Manually close the file stream.'); writable.end(); }, 1000);@param destinationOptional specific stream to unpipe
- chunk: any,encoding?: BufferEncoding): void;
Passing
chunkasnullsignals the end of the stream (EOF) and behaves the same asreadable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.The
readable.unshift()method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.The
stream.unshift(chunk)method cannot be called after the'end'event has been emitted or a runtime error will be thrown.Developers using
stream.unshift()often should consider switching to use of aTransformstream instead. See theAPI for stream implementerssection for more information.// Pull off a header delimited by \n\n. // Use unshift() if we get too much. // Call the callback with (error, header, stream). import { StringDecoder } from 'node:string_decoder'; function parseHeader(stream, callback) { stream.on('error', callback); stream.on('readable', onReadable); const decoder = new StringDecoder('utf8'); let header = ''; function onReadable() { let chunk; while (null !== (chunk = stream.read())) { const str = decoder.write(chunk); if (str.includes('\n\n')) { // Found the header boundary. const split = str.split(/\n\n/); header += split.shift(); const remaining = split.join('\n\n'); const buf = Buffer.from(remaining, 'utf8'); stream.removeListener('error', callback); // Remove the 'readable' listener before unshifting. stream.removeListener('readable', onReadable); if (buf.length) stream.unshift(buf); // Now the body of the message can be read from the stream. callback(null, header, stream); return; } // Still reading the header. header += str; } } }Unlike push,
stream.unshift(chunk)will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results ifreadable.unshift()is called during a read (i.e. from within a _read implementation on a custom stream). Following the call toreadable.unshift()with an immediate push will reset the reading state appropriately, however it is best to simply avoid callingreadable.unshift()while in the process of performing a read.@param chunkChunk of data to unshift onto the read queue. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray}, {DataView} ornull. For object mode streams,chunkmay be any JavaScript value.@param encodingEncoding of string chunks. Must be a valid
Bufferencoding, such as'utf8'or'ascii'. - wrap(stream: ReadableStream): this;
Prior to Node.js 0.10, streams did not implement the entire
node:streammodule API as it is currently defined. (SeeCompatibilityfor more information.)When using an older Node.js library that emits
'data'events and has a pause method that is advisory only, thereadable.wrap()method can be used to create aReadablestream that uses the old stream as its data source.It will rarely be necessary to use
readable.wrap()but the method has been provided as a convenience for interacting with older Node.js applications and libraries.import { OldReader } from './old-api-module.js'; import { Readable } from 'node:stream'; const oreader = new OldReader(); const myReader = new Readable().wrap(oreader); myReader.on('readable', () => { myReader.read(); // etc. });@param streamAn "old style" readable stream
- chunk: any,): boolean;
The
writable.write()method writes some data to the stream, and calls the suppliedcallbackonce the data has been fully handled. If an error occurs, thecallbackwill be called with the error as its first argument. Thecallbackis called asynchronously and before'error'is emitted.The return value is
trueif the internal buffer is less than thehighWaterMarkconfigured when the stream was created after admittingchunk. Iffalseis returned, further attempts to write data to the stream should stop until the'drain'event is emitted.While a stream is not draining, calls to
write()will bufferchunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'event will be emitted. Oncewrite()returns false, do not write more chunks until the'drain'event is emitted. While callingwrite()on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform, because theTransformstreams are paused by default until they are piped or a'data'or'readable'event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readableand use pipe. However, if callingwrite()is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });A
Writablestream in object mode will always ignore theencodingargument.@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.@param callbackCallback for when this chunk of data is flushed.
@returnsfalseif the stream wishes for the calling code to wait for the'drain'event to be emitted before continuing to write additional data; otherwisetrue.chunk: any,encoding: BufferEncoding,): boolean;The
writable.write()method writes some data to the stream, and calls the suppliedcallbackonce the data has been fully handled. If an error occurs, thecallbackwill be called with the error as its first argument. Thecallbackis called asynchronously and before'error'is emitted.The return value is
trueif the internal buffer is less than thehighWaterMarkconfigured when the stream was created after admittingchunk. Iffalseis returned, further attempts to write data to the stream should stop until the'drain'event is emitted.While a stream is not draining, calls to
write()will bufferchunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'event will be emitted. Oncewrite()returns false, do not write more chunks until the'drain'event is emitted. While callingwrite()on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform, because theTransformstreams are paused by default until they are piped or a'data'or'readable'event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readableand use pipe. However, if callingwrite()is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });A
Writablestream in object mode will always ignore theencodingargument.@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.@param encodingThe encoding, if
chunkis a string.@param callbackCallback for when this chunk of data is flushed.
@returnsfalseif the stream wishes for the calling code to wait for the'drain'event to be emitted before continuing to write additional data; otherwisetrue.
interface Inflate
Transform streams are
Duplexstreams where the output is in some way related to the input. Like allDuplexstreams,Transformstreams implement both theReadableandWritableinterfaces.Examples of
Transformstreams include:zlib streamscrypto streams
- allowHalfOpen: boolean
If
falsethen the stream will automatically end the writable side when the readable side ends. Set initially by theallowHalfOpenconstructor option, which defaults totrue.This can be changed manually to change the half-open behavior of an existing
Duplexstream instance, but must be changed before the'end'event is emitted. - readable: boolean
Is
trueif it is safe to call read, which means the stream has not been destroyed or emitted'error'or'end'. - readonly readableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'end'. - readonly readableEncoding: null | BufferEncoding
Getter for the property
encodingof a givenReadablestream. Theencodingproperty can be set using the setEncoding method. - readonly readableFlowing: null | boolean
This property reflects the current state of a
Readablestream as described in the Three states section. - readonly readableHighWaterMark: number
Returns the value of
highWaterMarkpassed when creating thisReadable. - readonly readableLength: number
This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the
highWaterMark. - readonly writable: boolean
Is
trueif it is safe to callwritable.write(), which means the stream has not been destroyed, errored, or ended. - readonly writableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'finish'. - readonly writableCorked: number
Number of times
writable.uncork()needs to be called in order to fully uncork the stream. - readonly writableEnded: boolean
Is
trueafterwritable.end()has been called. This property does not indicate whether the data has been flushed, for this usewritable.writableFinishedinstead. - readonly writableHighWaterMark: number
Return the value of
highWaterMarkpassed when creating thisWritable. - readonly writableLength: number
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the
highWaterMark. - readonly writableNeedDrain: boolean
Is
trueif the stream's buffer has been full and stream will emit'drain'. Calls
readable.destroy()with anAbortErrorand returns a promise that fulfills when the stream is finished.- @returns
AsyncIteratorto fully consume the stream. - event: 'close',listener: () => void): this;
Event emitter. The defined events on documents include: 'close', 'data', 'drain', 'end', 'error', 'finish', 'pause', 'pipe', 'readable', 'resume', and 'unpipe'. The same event list applies to each of the overloads below.
event: 'data', listener: (chunk: any) => void): this;
event: 'drain', listener: () => void): this;
event: 'end', listener: () => void): this;
event: 'error',): this;
event: 'finish', listener: () => void): this;
event: 'pause', listener: () => void): this;
event: 'pipe',): this;
event: 'readable', listener: () => void): this;
event: 'resume', listener: () => void): this;
event: 'unpipe',): this;
event: string | symbol, listener: (...args: any[]) => void): this;
This method returns a new stream with chunks of the underlying stream paired with a counter in the form
`[index, chunk]`. The first index value is `0` and it increases by 1 for each chunk produced.

@returns a stream of indexed pairs.
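A quick sketch with hypothetical data, assuming this is the `asIndexedPairs` helper from Node's Readable API (deprecated in recent Node.js versions):

```js
import { Readable } from 'node:stream';

// Pair every chunk with its zero-based index.
const pairs = await Readable.from(['x', 'y']).asIndexedPairs().toArray();
console.log(pairs); // [ [ 0, 'x' ], [ 1, 'y' ] ]
```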
- stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
The
writable.cork()method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.The primary intent of
writable.cork()is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination,writable.cork()buffers all the chunks untilwritable.uncork()is called, which will pass them all towritable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use ofwritable.cork()without implementingwritable._writev()may have an adverse effect on throughput.See also:
writable.uncork(),writable._writev().- ): this;
Destroy the stream. Optionally emit an
'error'event, and emit a'close'event (unlessemitCloseis set tofalse). After this call, the readable stream will release any internal resources and subsequent calls topush()will be ignored.Once
destroy()has been called any further calls will be a no-op and no further errors except from_destroy()may be emitted as'error'.Implementors should not override this method, but instead implement
readable._destroy().@param errorError which will be passed as payload in
'error'event - drop(limit: number,
This method returns a new stream with the first limit chunks dropped from the start.
@param limitthe number of chunks to drop from the readable.
@returnsa stream with limit chunks dropped from the start.
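A minimal sketch of drop with hypothetical data:

```js
import { Readable } from 'node:stream';

// Skip the first two chunks and keep the rest.
const rest = await Readable.from([1, 2, 3, 4]).drop(2).toArray();
console.log(rest); // [ 3, 4 ]
```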
- emit(event: 'close'): boolean;
Synchronously calls each of the listeners registered for the event named
eventName, in the order they were registered, passing the supplied arguments to each.Returns
trueif the event had listeners,falseotherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener - end(cb?: () => void): this;
Calling the
writable.end()method signals that no more data will be written to theWritable. The optionalchunkandencodingarguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!end(chunk: any,cb?: () => void): this;Calling the
writable.end()method signals that no more data will be written to theWritable. The optionalchunkandencodingarguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.end(chunk: any,encoding: BufferEncoding,cb?: () => void): this;Calling the
writable.end()method signals that no more data will be written to theWritable. The optionalchunkandencodingarguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.@param encodingThe encoding if
chunkis a string Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbols.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]- ): Promise<boolean>;
This method is similar to
`Array.prototype.every` and calls fn on each chunk in the stream to check if all awaited return values are truthy for fn. Once an fn call on a chunk's awaited return value is falsy, the stream is destroyed and the promise is fulfilled with `false`. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with `true`.

@param fn a function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
`true` if fn returned a truthy value for every one of the chunks. This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise, that promise will be awaited.

@param fn a function to filter chunks from the stream. Async or not.
@returnsa stream filtered with the predicate fn.
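Minimal sketches of the every and filter helpers, assuming hypothetical data:

```js
import { Readable } from 'node:stream';

// every(): true only if the predicate holds for all chunks.
const allPositive = await Readable.from([1, 2, 3]).every((n) => n > 0);
console.log(allPositive); // true

// filter(): keep only chunks the predicate accepts; fn may be async.
const evens = await Readable.from([1, 2, 3, 4])
  .filter((n) => n % 2 === 0)
  .toArray();
console.log(evens); // [ 2, 4 ]
```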
- ): Promise<undefined | T>;
This method is similar to
`Array.prototype.find` and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with `undefined`.

@param fn a function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefinedif no element was found.find(): Promise<any>;This method is similar to
`Array.prototype.find` and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with `undefined`.

@param fn a function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
`undefined` if no element was found. This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.
It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.
@param fna function to map over every chunk in the stream. May be async. May be a stream or generator.
@returnsa stream flat-mapped with the function fn.
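Minimal sketches of the find and flatMap helpers, assuming hypothetical data:

```js
import { Readable } from 'node:stream';

// find(): resolves with the first matching chunk, or undefined.
const firstLong = await Readable.from(['hi', 'hello', 'hey'])
  .find((word) => word.length > 3);
console.log(firstLong); // 'hello'

// flatMap(): each chunk maps to an iterable that is flattened into the output.
const letters = await Readable.from(['ab', 'cd'])
  .flatMap((chunk) => chunk.split(''))
  .toArray();
console.log(letters); // [ 'a', 'b', 'c', 'd' ]
```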
- ): Promise<void>;
This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be
awaited.This method is different from
`for await...of` loops in that it can optionally process chunks concurrently. In addition, a forEach iteration can only be stopped by having passed a `signal` option and aborting the related AbortController, while `for await...of` can be stopped with `break` or `return`. In either case the stream will be destroyed.

This method is different from listening to the `'data'` event in that it uses the `readable` event in the underlying machinery and can limit the number of concurrent fn calls.

@param fn a function to call on each chunk of the stream. Async or not.
@returnsa promise for when the stream has finished.
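A minimal sketch of forEach with hypothetical data; `concurrency` is the option Node's helper accepts for limiting parallel async calls:

```js
import { Readable } from 'node:stream';

// Log each chunk; with an async fn, up to 2 chunks are processed at once.
await Readable.from([1, 2, 3]).forEach(
  async (n) => { console.log(n); },
  { concurrency: 2 },
);
```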
Returns the current max listener value for the
EventEmitterwhich is either set byemitter.setMaxListeners(n)or defaults to EventEmitter.defaultMaxListeners.The
readable.isPaused()method returns the current operating state of theReadable. This is used primarily by the mechanism that underlies thereadable.pipe()method. In most typical cases, there will be no reason to use this method directly.const readable = new stream.Readable(); readable.isPaused(); // === false readable.pause(); readable.isPaused(); // === true readable.resume(); readable.isPaused(); // === false- options?: { destroyOnReturn: boolean }): AsyncIterator<any>;
The iterator created by this method gives users the option to cancel the destruction of the stream if the
for await...ofloop is exited byreturn,break, orthrow, or if the iterator should destroy the stream if the stream emitted an error during iteration. - eventName: string | symbol,listener?: Function): number;
Returns the number of listeners listening for the event named
eventName. Iflisteneris provided, it will return how many times the listener is found in the list of the listeners of the event.@param eventNameThe name of the event being listened for
@param listenerThe event handler function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName.server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ] - map(
This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be
awaited before being passed to the result stream.@param fna function to map over every chunk in the stream. Async or not.
@returnsa stream mapped with the function fn.
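A minimal sketch of map with hypothetical data; an async fn can be run concurrently via the `concurrency` option:

```js
import { Readable } from 'node:stream';

// Double every chunk.
const doubled = await Readable.from([1, 2, 3])
  .map((n) => n * 2)
  .toArray();
console.log(doubled); // [ 2, 4, 6 ]
```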
- eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for
emitter.removeListener(). - on(event: 'close',listener: () => void): this;
Adds the
listenerfunction to the end of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
- once(event: 'close',listener: () => void): this;
Adds a one-time
listenerfunction for the event namedeventName. The next timeeventNameis triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
The
readable.pause()method will cause a stream in flowing mode to stop emitting'data'events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.const readable = getReadableStreamSomehow(); readable.on('data', (chunk) => { console.log(`Received ${chunk.length} bytes of data.`); readable.pause(); console.log('There will be no additional data for 1 second.'); setTimeout(() => { console.log('Now data will start flowing again.'); readable.resume(); }, 1000); });The
readable.pause()method has no effect if there is a'readable'event listener.- event: 'close',listener: () => void): this;
Adds the
listenerfunction to the beginning of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
- event: 'close',listener: () => void): this;
Adds a one-time
listenerfunction for the event namedeventNameto the beginning of the listeners array. The next timeeventNameis triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName, including any wrappers (such as those created by.once()).import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log'); - read(size?: number): any;
The
readable.read()method reads data out of the internal buffer and returns it. If no data is available to be read,nullis returned. By default, the data is returned as aBufferobject unless an encoding has been specified using thereadable.setEncoding()method or the stream is operating in object mode.The optional
sizeargument specifies a specific number of bytes to read. Ifsizebytes are not available to be read,nullwill be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.If the
sizeargument is not specified, all of the data contained in the internal buffer will be returned.The
sizeargument must be less than or equal to 1 GiB.The
`readable.read()` method should only be called on `Readable` streams operating in paused mode. In flowing mode, `readable.read()` is called automatically until the internal buffer is fully drained.

```js
const readable = getReadableStreamSomehow();

// 'readable' may be triggered multiple times as data is buffered in
readable.on('readable', () => {
  let chunk;
  console.log('Stream is readable (new data received in buffer)');
  // Use a loop to make sure we read all currently available data
  while (null !== (chunk = readable.read())) {
    console.log(`Read ${chunk.length} bytes of data...`);
  }
});

// 'end' will be triggered once when there is no more data available
readable.on('end', () => {
  console.log('Reached end of stream.');
});
```

Each call to `readable.read()` returns a chunk of data, or `null`. The chunks are not concatenated. A `while` loop is necessary to consume all data currently in the buffer. When reading a large file, `.read()` may return `null`, having consumed all buffered content so far, while more data not yet buffered is still to come. In this case a new `'readable'` event will be emitted when there is more data in the buffer. Finally, the `'end'` event will be emitted when there is no more data to come.

Therefore, to read a file's whole contents from a `readable`, it is necessary to collect chunks across multiple `'readable'` events:

```js
const chunks = [];

readable.on('readable', () => {
  let chunk;
  while (null !== (chunk = readable.read())) {
    chunks.push(chunk);
  }
});

readable.on('end', () => {
  const content = chunks.join('');
});
```

A `Readable` stream in object mode will always return a single item from a call to `readable.read(size)`, regardless of the value of the `size` argument.

If the `readable.read()` method returns a chunk of data, a `'data'` event will also be emitted.

Calling read after the `'end'` event has been emitted will return `null`. No runtime error will be raised.

@param size Optional argument to specify how much data to read.
- initial?: undefined,): Promise<T>;
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
`TypeError` with the `ERR_INVALID_ARGS` code property. The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function and use the `readable.map` method.

@param fn a reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
initial: T,): Promise<T>;This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
`TypeError` with the `ERR_INVALID_ARGS` code property. The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function and use the `readable.map` method.

@param fn a reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
- eventName?: string | symbol): this;
Removes all listeners, or those of the specified
eventName.It is bad practice to remove listeners added elsewhere in the code, particularly when the
EventEmitterinstance was created by some other component or module (e.g. sockets or file streams).Returns a reference to the
EventEmitter, so that calls can be chained. - event: 'close',listener: () => void): this;
Removes the specified
listenerfrom the listener array for the event namedeventName.const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback);removeListener()will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specifiedeventName, thenremoveListener()must be called multiple times to remove each instance.Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any
removeListener()orremoveAllListeners()calls after emitting and before the last listener finishes execution will not remove them fromemit()in progress. Subsequent events behave as expected.import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // ABecause listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the
emitter.listeners()method will need to be recreated.When a single function has been added as a handler multiple times for a single event (as in the example below),
removeListener()will remove the most recently added instance. In the example theonce('ping')listener is removed:import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping');Returns a reference to the
EventEmitter, so that calls can be chained. The
readable.resume()method causes an explicitly pausedReadablestream to resume emitting'data'events, switching the stream into flowing mode.The
readable.resume()method can be used to fully consume the data from a stream without actually processing any of that data:getReadableStreamSomehow() .resume() .on('end', () => { console.log('Reached the end, but did not read anything.'); });The
readable.resume()method has no effect if there is a'readable'event listener.- encoding: BufferEncoding): this;
The
`writable.setDefaultEncoding()` method sets the default `encoding` for a `Writable` stream.

@param encoding The new default encoding
- encoding: BufferEncoding): this;
The
readable.setEncoding()method sets the character encoding for data read from theReadablestream.By default, no encoding is assigned and stream data will be returned as
Bufferobjects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than asBufferobjects. For instance, callingreadable.setEncoding('utf8')will cause the output data to be interpreted as UTF-8 data, and passed as strings. Callingreadable.setEncoding('hex')will cause the data to be encoded in hexadecimal string format.The
Readablestream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream asBufferobjects.const readable = getReadableStreamSomehow(); readable.setEncoding('utf8'); readable.on('data', (chunk) => { assert.equal(typeof chunk, 'string'); console.log('Got %d characters of string data:', chunk.length); });@param encodingThe encoding to use.
- n: number): this;
By default
EventEmitters will print a warning if more than10listeners are added for a particular event. This is a useful default that helps finding memory leaks. Theemitter.setMaxListeners()method allows the limit to be modified for this specificEventEmitterinstance. The value can be set toInfinity(or0) to indicate an unlimited number of listeners.Returns a reference to the
EventEmitter, so that calls can be chained. - some(): Promise<boolean>;
This method is similar to
`Array.prototype.some` and calls fn on each chunk in the stream until the awaited return value is `true` (or any truthy value). Once an fn call on a chunk's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with `true`. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with `false`.

@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to `true` if fn returned a truthy value for at least one of the chunks.
- @param limit the number of chunks to take from the readable.
@returns a stream with limit chunks taken.
- ): Promise<any[]>;
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.
@returnsa promise containing an array with the contents of the stream.
The
writable.uncork()method flushes all data buffered since cork was called.When using
writable.cork()andwritable.uncork()to manage the buffering of writes to a stream, defer calls towritable.uncork()usingprocess.nextTick(). Doing so allows batching of allwritable.write()calls that occur within a given Node.js event loop phase.stream.cork(); stream.write('some '); stream.write('data '); process.nextTick(() => stream.uncork());If the
writable.cork()method is called multiple times on a stream, the same number of calls towritable.uncork()must be called to flush the buffered data.stream.cork(); stream.write('some '); stream.cork(); stream.write('data '); process.nextTick(() => { stream.uncork(); // The data will not be flushed until uncork() is called a second time. stream.uncork(); });See also:
writable.cork().- destination?: WritableStream): this;
The
readable.unpipe()method detaches aWritablestream previously attached using the pipe method.If the
destinationis not specified, then all pipes are detached.If the
destinationis specified, but no pipe is set up for it, then the method does nothing.import fs from 'node:fs'; const readable = getReadableStreamSomehow(); const writable = fs.createWriteStream('file.txt'); // All the data from readable goes into 'file.txt', // but only for the first second. readable.pipe(writable); setTimeout(() => { console.log('Stop writing to file.txt.'); readable.unpipe(writable); console.log('Manually close the file stream.'); writable.end(); }, 1000);@param destinationOptional specific stream to unpipe
- chunk: any,encoding?: BufferEncoding): void;
Passing
chunkasnullsignals the end of the stream (EOF) and behaves the same asreadable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.The
readable.unshift()method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.The
stream.unshift(chunk)method cannot be called after the'end'event has been emitted or a runtime error will be thrown.Developers using
stream.unshift()often should consider switching to use of aTransformstream instead. See theAPI for stream implementerssection for more information.// Pull off a header delimited by \n\n. // Use unshift() if we get too much. // Call the callback with (error, header, stream). import { StringDecoder } from 'node:string_decoder'; function parseHeader(stream, callback) { stream.on('error', callback); stream.on('readable', onReadable); const decoder = new StringDecoder('utf8'); let header = ''; function onReadable() { let chunk; while (null !== (chunk = stream.read())) { const str = decoder.write(chunk); if (str.includes('\n\n')) { // Found the header boundary. const split = str.split(/\n\n/); header += split.shift(); const remaining = split.join('\n\n'); const buf = Buffer.from(remaining, 'utf8'); stream.removeListener('error', callback); // Remove the 'readable' listener before unshifting. stream.removeListener('readable', onReadable); if (buf.length) stream.unshift(buf); // Now the body of the message can be read from the stream. callback(null, header, stream); return; } // Still reading the header. header += str; } } }Unlike push,
stream.unshift(chunk)will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results ifreadable.unshift()is called during a read (i.e. from within a _read implementation on a custom stream). Following the call toreadable.unshift()with an immediate push will reset the reading state appropriately, however it is best to simply avoid callingreadable.unshift()while in the process of performing a read.@param chunkChunk of data to unshift onto the read queue. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray}, {DataView} ornull. For object mode streams,chunkmay be any JavaScript value.@param encodingEncoding of string chunks. Must be a valid
Bufferencoding, such as'utf8'or'ascii'. - wrap(stream: ReadableStream): this;
Prior to Node.js 0.10, streams did not implement the entire
node:streammodule API as it is currently defined. (SeeCompatibilityfor more information.)When using an older Node.js library that emits
'data'events and has a pause method that is advisory only, thereadable.wrap()method can be used to create aReadablestream that uses the old stream as its data source.It will rarely be necessary to use
readable.wrap()but the method has been provided as a convenience for interacting with older Node.js applications and libraries.import { OldReader } from './old-api-module.js'; import { Readable } from 'node:stream'; const oreader = new OldReader(); const myReader = new Readable().wrap(oreader); myReader.on('readable', () => { myReader.read(); // etc. });@param streamAn "old style" readable stream
- chunk: any,): boolean;
The
writable.write()method writes some data to the stream, and calls the suppliedcallbackonce the data has been fully handled. If an error occurs, thecallbackwill be called with the error as its first argument. Thecallbackis called asynchronously and before'error'is emitted.The return value is
trueif the internal buffer is less than thehighWaterMarkconfigured when the stream was created after admittingchunk. Iffalseis returned, further attempts to write data to the stream should stop until the'drain'event is emitted.While a stream is not draining, calls to
write()will bufferchunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'event will be emitted. Oncewrite()returns false, do not write more chunks until the'drain'event is emitted. While callingwrite()on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform, because theTransformstreams are paused by default until they are piped or a'data'or'readable'event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readableand use pipe. However, if callingwrite()is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });A
Writablestream in object mode will always ignore theencodingargument.@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.@param callbackCallback for when this chunk of data is flushed.
@returnsfalseif the stream wishes for the calling code to wait for the'drain'event to be emitted before continuing to write additional data; otherwisetrue.chunk: any,encoding: BufferEncoding,): boolean;The
writable.write()method writes some data to the stream, and calls the suppliedcallbackonce the data has been fully handled. If an error occurs, thecallbackwill be called with the error as its first argument. Thecallbackis called asynchronously and before'error'is emitted.The return value is
trueif the internal buffer is less than thehighWaterMarkconfigured when the stream was created after admittingchunk. Iffalseis returned, further attempts to write data to the stream should stop until the'drain'event is emitted.While a stream is not draining, calls to
write()will bufferchunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'event will be emitted. Oncewrite()returns false, do not write more chunks until the'drain'event is emitted. While callingwrite()on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform, because theTransformstreams are paused by default until they are piped or a'data'or'readable'event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readableand use pipe. However, if callingwrite()is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });A
Writablestream in object mode will always ignore theencodingargument.@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.@param encodingThe encoding, if
chunkis a string.@param callbackCallback for when this chunk of data is flushed.
@returnsfalseif the stream wishes for the calling code to wait for the'drain'event to be emitted before continuing to write additional data; otherwisetrue.
interface InflateRaw
Transform streams are
Duplexstreams where the output is in some way related to the input. Like allDuplexstreams,Transformstreams implement both theReadableandWritableinterfaces.Examples of
Transformstreams include:zlib streamscrypto streams
- allowHalfOpen: boolean
If
falsethen the stream will automatically end the writable side when the readable side ends. Set initially by theallowHalfOpenconstructor option, which defaults totrue.This can be changed manually to change the half-open behavior of an existing
Duplexstream instance, but must be changed before the'end'event is emitted. - readable: boolean
Is
trueif it is safe to call read, which means the stream has not been destroyed or emitted'error'or'end'. - readonly readableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'end'. - readonly readableEncoding: null | BufferEncoding
Getter for the property
encodingof a givenReadablestream. Theencodingproperty can be set using the setEncoding method. - readonly readableFlowing: null | boolean
This property reflects the current state of a
Readablestream as described in the Three states section. - readonly readableHighWaterMark: number
Returns the value of
highWaterMarkpassed when creating thisReadable. - readonly readableLength: number
This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the
highWaterMark. - readonly writable: boolean
Is
trueif it is safe to callwritable.write(), which means the stream has not been destroyed, errored, or ended. - readonly writableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'finish'. - readonly writableCorked: number
Number of times
writable.uncork()needs to be called in order to fully uncork the stream. - readonly writableEnded: boolean
Is
trueafterwritable.end()has been called. This property does not indicate whether the data has been flushed, for this usewritable.writableFinishedinstead. - readonly writableHighWaterMark: number
Return the value of
highWaterMarkpassed when creating thisWritable. - readonly writableLength: number
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the
highWaterMark. - readonly writableNeedDrain: boolean
Is
trueif the stream's buffer has been full and stream will emit'drain'. Calls
readable.destroy()with anAbortErrorand returns a promise that fulfills when the stream is finished.- @returns
AsyncIteratorto fully consume the stream. - event: 'close',listener: () => void): this;
Event emitter. Each of the overloads below registers a listener for one of the defined events:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'data',listener: (chunk: any) => void): this;
event: 'drain',listener: () => void): this;
event: 'end',listener: () => void): this;
event: 'error',): this;
event: 'finish',listener: () => void): this;
event: 'pause',listener: () => void): this;
event: 'pipe',): this;
event: 'readable',listener: () => void): this;
event: 'resume',listener: () => void): this;
event: 'unpipe',): this;
event: string | symbol,listener: (...args: any[]) => void): this;
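A short sketch of subscribing to a few of these events on a zlib transform (event names as listed above):

import zlib from 'node:zlib';

const gzip = zlib.createGzip();

gzip.on('data', (chunk) => {
  console.log(`compressed chunk of ${chunk.length} bytes`);
});
gzip.on('end', () => console.log('no more compressed data'));
gzip.on('close', () => console.log('stream closed'));

gzip.end('some input');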
This method returns a new stream with chunks of the underlying stream paired with a counter in the form
[index, chunk]. The first index value is0and it increases by 1 for each chunk produced.@returnsa stream of indexed pairs.
- stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
The
writable.cork()method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.The primary intent of
writable.cork()is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination,writable.cork()buffers all the chunks untilwritable.uncork()is called, which will pass them all towritable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use ofwritable.cork()without implementingwritable._writev()may have an adverse effect on throughput.See also:
writable.uncork(),writable._writev().- ): this;
Destroy the stream. Optionally emit an
'error'event, and emit a'close'event (unlessemitCloseis set tofalse). After this call, the readable stream will release any internal resources and subsequent calls topush()will be ignored.Once
destroy()has been called any further calls will be a no-op and no further errors except from_destroy()may be emitted as'error'.Implementors should not override this method, but instead implement
readable._destroy().@param errorError which will be passed as payload in
'error'event - drop(limit: number,
This method returns a new stream with the first limit chunks dropped from the start.
@param limitthe number of chunks to drop from the readable.
@returnsa stream with limit chunks dropped from the start.
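A minimal sketch, assuming the experimental stream helper methods (drop, toArray) are available:

import { Readable } from 'node:stream';

// drop(2) skips the first two chunks; the rest flow through unchanged.
const remaining = await Readable.from([1, 2, 3, 4]).drop(2).toArray();
console.log(remaining); // [3, 4]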
- emit(event: 'close'): boolean;
Synchronously calls each of the listeners registered for the event named
eventName, in the order they were registered, passing the supplied arguments to each.Returns
trueif the event had listeners,falseotherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener - end(cb?: () => void): this;
Calling the
writable.end()method signals that no more data will be written to theWritable. The optionalchunkandencodingarguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!end(chunk: any,cb?: () => void): this;Calling the
writable.end()method signals that no more data will be written to theWritable. The optionalchunkandencodingarguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.end(chunk: any,encoding: BufferEncoding,cb?: () => void): this;Calling the
writable.end()method signals that no more data will be written to theWritable. The optionalchunkandencodingarguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.@param encodingThe encoding if
chunkis a string Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbols.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]- ): Promise<boolean>;
This method is similar to
Array.prototype.everyand calls fn on each chunk in the stream to check whether all awaited return values are truthy for fn. Once an fn call's awaited return value for a chunk is falsy, the stream is destroyed and the promise is fulfilled withfalse. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled withtrue.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
trueif fn returned a truthy value for every one of the chunks. This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be
awaited.@param fna function to filter chunks from the stream. Async or not.
@returnsa stream filtered with the predicate fn.
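A minimal sketch of filter, assuming the experimental stream helper methods are available:

import { Readable } from 'node:stream';

// Keep only even numbers; fn may also be async.
const evens = await Readable.from([1, 2, 3, 4, 5])
  .filter((n) => n % 2 === 0)
  .toArray();
console.log(evens); // [2, 4]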
- ): Promise<undefined | T>;
This method is similar to
Array.prototype.findand calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled withundefined.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefinedif no element was found.find(): Promise<any>;This method is similar to
Array.prototype.findand calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled withundefined.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefinedif no element was found. This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.
It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.
@param fna function to map over every chunk in the stream. May be async. May be a stream or generator.
@returnsa stream flat-mapped with the function fn.
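A minimal sketch of flatMap, assuming the experimental stream helper methods are available:

import { Readable } from 'node:stream';

// Each chunk maps to an iterable; the results are flattened
// into a single stream of words.
const words = await Readable.from(['hello world', 'foo bar'])
  .flatMap((line) => line.split(' '))
  .toArray();
console.log(words); // ['hello', 'world', 'foo', 'bar']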
- ): Promise<void>;
This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be
awaited.This method is different from
for await...ofloops in that it can optionally process chunks concurrently. In addition, aforEachiteration can only be stopped by having passed asignaloption and aborting the related AbortController whilefor await...ofcan be stopped withbreakorreturn. In either case the stream will be destroyed.This method is different from listening to the
'data'event in that it uses thereadableevent in the underlying machinery and can limit the number of concurrent fn calls.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise for when the stream has finished.
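A minimal sketch of forEach with a concurrency limit, assuming the experimental stream helper methods are available:

import { Readable } from 'node:stream';

// Process up to two chunks at a time; the returned promise
// resolves once every fn call has settled.
await Readable.from([1, 2, 3, 4]).forEach(
  async (n) => {
    console.log('processing', n);
  },
  { concurrency: 2 }, // optional concurrency limit
);
console.log('stream finished');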
Returns the current max listener value for the
EventEmitterwhich is either set byemitter.setMaxListeners(n)or defaults to EventEmitter.defaultMaxListeners.The
readable.isPaused()method returns the current operating state of theReadable. This is used primarily by the mechanism that underlies thereadable.pipe()method. In most typical cases, there will be no reason to use this method directly.const readable = new stream.Readable(); readable.isPaused(); // === false readable.pause(); readable.isPaused(); // === true readable.resume(); readable.isPaused(); // === false- options?: { destroyOnReturn: boolean }): AsyncIterator<any>;
The iterator created by this method gives users the option to cancel the destruction of the stream if the
for await...ofloop is exited byreturn,break, orthrow, or if the iterator should destroy the stream if the stream emitted an error during iteration. - eventName: string | symbol,listener?: Function): number;
Returns the number of listeners listening for the event named
eventName. Iflisteneris provided, it will return how many times the listener is found in the list of the listeners of the event.@param eventNameThe name of the event being listened for
@param listenerThe event handler function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName.server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ] - map(
This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be
awaited before being passed to the result stream.@param fna function to map over every chunk in the stream. Async or not.
@returnsa stream mapped with the function fn.
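A minimal sketch of map, assuming the experimental stream helper methods are available:

import { Readable } from 'node:stream';

// Double each chunk; an async fn is awaited before its result
// is passed along.
const doubled = await Readable.from([1, 2, 3])
  .map(async (n) => n * 2)
  .toArray();
console.log(doubled); // [2, 4, 6]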
- eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for
emitter.removeListener(). - on(event: 'close',listener: () => void): this;
Adds the
listenerfunction to the end of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
- once(event: 'close',listener: () => void): this;
Adds a one-time
listenerfunction for the event namedeventName. The next timeeventNameis triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
The
readable.pause()method will cause a stream in flowing mode to stop emitting'data'events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.const readable = getReadableStreamSomehow(); readable.on('data', (chunk) => { console.log(`Received ${chunk.length} bytes of data.`); readable.pause(); console.log('There will be no additional data for 1 second.'); setTimeout(() => { console.log('Now data will start flowing again.'); readable.resume(); }, 1000); });The
readable.pause()method has no effect if there is a'readable'event listener.- event: 'close',listener: () => void): this;
Adds the
listenerfunction to the beginning of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
- event: 'close',listener: () => void): this;
Adds a one-time
listenerfunction for the event namedeventNameto the beginning of the listeners array. The next timeeventNameis triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName, including any wrappers (such as those created by.once()).import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log'); - read(size?: number): any;
The
readable.read()method reads data out of the internal buffer and returns it. If no data is available to be read,nullis returned. By default, the data is returned as aBufferobject unless an encoding has been specified using thereadable.setEncoding()method or the stream is operating in object mode.The optional
sizeargument specifies a specific number of bytes to read. Ifsizebytes are not available to be read,nullwill be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.If the
sizeargument is not specified, all of the data contained in the internal buffer will be returned.The
sizeargument must be less than or equal to 1 GiB.The
readable.read()method should only be called onReadablestreams operating in paused mode. In flowing mode,readable.read()is called automatically until the internal buffer is fully drained.const readable = getReadableStreamSomehow(); // 'readable' may be triggered multiple times as data is buffered in readable.on('readable', () => { let chunk; console.log('Stream is readable (new data received in buffer)'); // Use a loop to make sure we read all currently available data while (null !== (chunk = readable.read())) { console.log(`Read ${chunk.length} bytes of data...`); } }); // 'end' will be triggered once when there is no more data available readable.on('end', () => { console.log('Reached end of stream.'); });Each call to
readable.read()returns a chunk of data, ornull. The chunks are not concatenated. Awhileloop is necessary to consume all data currently in the buffer. When reading a large file,.read()may returnnull, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new'readable'event will be emitted when there is more data in the buffer. Finally, the'end'event will be emitted when there is no more data to come.Therefore, to read a file's whole contents from a
readable, it is necessary to collect chunks across multiple'readable'events:const chunks = []; readable.on('readable', () => { let chunk; while (null !== (chunk = readable.read())) { chunks.push(chunk); } }); readable.on('end', () => { const content = chunks.join(''); });A
Readablestream in object mode will always return a single item from a call toreadable.read(size), regardless of the value of thesizeargument.If the
readable.read()method returns a chunk of data, a'data'event will also be emitted.Calling read after the
'end'event has been emitted will returnnull. No runtime error will be raised.@param sizeOptional argument to specify how much data to read.
- initial?: undefined,): Promise<T>;
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeErrorwith theERR_INVALID_ARGScode property.The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function and use the readable.mapmethod.@param fna reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
initial: T,): Promise<T>;This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeErrorwith theERR_INVALID_ARGScode property.The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function and use the readable.mapmethod.@param fna reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
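A minimal sketch of reduce, assuming the experimental stream helper methods are available:

import { Readable } from 'node:stream';

// Sum the chunks, starting the accumulator at 0.
const total = await Readable.from([1, 2, 3, 4]).reduce(
  (sum, n) => sum + n,
  0, // initial value; if omitted, the first chunk is used
);
console.log(total); // 10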
- eventName?: string | symbol): this;
Removes all listeners, or those of the specified
eventName.It is bad practice to remove listeners added elsewhere in the code, particularly when the
EventEmitterinstance was created by some other component or module (e.g. sockets or file streams).Returns a reference to the
EventEmitter, so that calls can be chained. - event: 'close',listener: () => void): this;
Removes the specified
listenerfrom the listener array for the event namedeventName.const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback);removeListener()will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specifiedeventName, thenremoveListener()must be called multiple times to remove each instance.Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any
removeListener()orremoveAllListeners()calls after emitting and before the last listener finishes execution will not remove them fromemit()in progress. Subsequent events behave as expected.import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // ABecause listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the
emitter.listeners()method will need to be recreated.When a single function has been added as a handler multiple times for a single event (as in the example below),
removeListener()will remove the most recently added instance. In the example theonce('ping')listener is removed:import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping');Returns a reference to the
EventEmitter, so that calls can be chained. The
readable.resume()method causes an explicitly pausedReadablestream to resume emitting'data'events, switching the stream into flowing mode.The
readable.resume()method can be used to fully consume the data from a stream without actually processing any of that data:getReadableStreamSomehow() .resume() .on('end', () => { console.log('Reached the end, but did not read anything.'); });The
readable.resume()method has no effect if there is a'readable'event listener.- encoding: BufferEncoding): this;
The
writable.setDefaultEncoding()method sets the defaultencodingfor aWritablestream.@param encodingThe new default encoding
- encoding: BufferEncoding): this;
The
readable.setEncoding()method sets the character encoding for data read from theReadablestream.By default, no encoding is assigned and stream data will be returned as
Bufferobjects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than asBufferobjects. For instance, callingreadable.setEncoding('utf8')will cause the output data to be interpreted as UTF-8 data, and passed as strings. Callingreadable.setEncoding('hex')will cause the data to be encoded in hexadecimal string format.The
Readablestream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream asBufferobjects.const readable = getReadableStreamSomehow(); readable.setEncoding('utf8'); readable.on('data', (chunk) => { assert.equal(typeof chunk, 'string'); console.log('Got %d characters of string data:', chunk.length); });@param encodingThe encoding to use.
- n: number): this;
By default
EventEmitters will print a warning if more than10listeners are added for a particular event. This is a useful default that helps find memory leaks. Theemitter.setMaxListeners()method allows the limit to be modified for this specificEventEmitterinstance. The value can be set toInfinity(or0) to indicate an unlimited number of listeners.Returns a reference to the
EventEmitter, so that calls can be chained. - some(): Promise<boolean>;
This method is similar to
Array.prototype.someand calls fn on each chunk in the stream until the awaited return value istrue(or any truthy value). Once an fn call's awaited return value for a chunk is truthy, the stream is destroyed and the promise is fulfilled withtrue. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled withfalse.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
trueif fn returned a truthy value for at least one of the chunks. - @param limit
the number of chunks to take from the readable.
@returnsa stream with limit chunks taken.
- ): Promise<any[]>;
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.
@returnsa promise containing an array with the contents of the stream.
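A minimal sketch of toArray, assuming the experimental stream helper methods are available:

import { Readable } from 'node:stream';

// Collects the whole stream into memory; convenient for small
// streams, but it defeats the purpose of streaming for large data.
const contents = await Readable.from(['a', 'b', 'c']).toArray();
console.log(contents); // ['a', 'b', 'c']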
The
writable.uncork()method flushes all data buffered since cork was called.When using
writable.cork()andwritable.uncork()to manage the buffering of writes to a stream, defer calls towritable.uncork()usingprocess.nextTick(). Doing so allows batching of allwritable.write()calls that occur within a given Node.js event loop phase.stream.cork(); stream.write('some '); stream.write('data '); process.nextTick(() => stream.uncork());If the
writable.cork()method is called multiple times on a stream, the same number of calls towritable.uncork()must be called to flush the buffered data.stream.cork(); stream.write('some '); stream.cork(); stream.write('data '); process.nextTick(() => { stream.uncork(); // The data will not be flushed until uncork() is called a second time. stream.uncork(); });See also:
writable.cork().- destination?: WritableStream): this;
The
readable.unpipe()method detaches aWritablestream previously attached using the pipe method.If the
destinationis not specified, then all pipes are detached.If the
destinationis specified, but no pipe is set up for it, then the method does nothing.import fs from 'node:fs'; const readable = getReadableStreamSomehow(); const writable = fs.createWriteStream('file.txt'); // All the data from readable goes into 'file.txt', // but only for the first second. readable.pipe(writable); setTimeout(() => { console.log('Stop writing to file.txt.'); readable.unpipe(writable); console.log('Manually close the file stream.'); writable.end(); }, 1000);@param destinationOptional specific stream to unpipe
- chunk: any,encoding?: BufferEncoding): void;
Passing
chunkasnullsignals the end of the stream (EOF) and behaves the same asreadable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.The
readable.unshift()method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.The
stream.unshift(chunk)method cannot be called after the'end'event has been emitted or a runtime error will be thrown.Developers using
stream.unshift()often should consider switching to use of aTransformstream instead. See theAPI for stream implementerssection for more information.// Pull off a header delimited by \n\n. // Use unshift() if we get too much. // Call the callback with (error, header, stream). import { StringDecoder } from 'node:string_decoder'; function parseHeader(stream, callback) { stream.on('error', callback); stream.on('readable', onReadable); const decoder = new StringDecoder('utf8'); let header = ''; function onReadable() { let chunk; while (null !== (chunk = stream.read())) { const str = decoder.write(chunk); if (str.includes('\n\n')) { // Found the header boundary. const split = str.split(/\n\n/); header += split.shift(); const remaining = split.join('\n\n'); const buf = Buffer.from(remaining, 'utf8'); stream.removeListener('error', callback); // Remove the 'readable' listener before unshifting. stream.removeListener('readable', onReadable); if (buf.length) stream.unshift(buf); // Now the body of the message can be read from the stream. callback(null, header, stream); return; } // Still reading the header. header += str; } } }Unlike push,
stream.unshift(chunk)will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results ifreadable.unshift()is called during a read (i.e. from within a _read implementation on a custom stream). Following the call toreadable.unshift()with an immediate push will reset the reading state appropriately, however it is best to simply avoid callingreadable.unshift()while in the process of performing a read.@param chunkChunk of data to unshift onto the read queue. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray}, {DataView} ornull. For object mode streams,chunkmay be any JavaScript value.@param encodingEncoding of string chunks. Must be a valid
Bufferencoding, such as'utf8'or'ascii'. - wrap(stream: ReadableStream): this;
Prior to Node.js 0.10, streams did not implement the entire
node:streammodule API as it is currently defined. (SeeCompatibilityfor more information.)When using an older Node.js library that emits
'data'events and has a pause method that is advisory only, thereadable.wrap()method can be used to create aReadablestream that uses the old stream as its data source.It will rarely be necessary to use
readable.wrap()but the method has been provided as a convenience for interacting with older Node.js applications and libraries.import { OldReader } from './old-api-module.js'; import { Readable } from 'node:stream'; const oreader = new OldReader(); const myReader = new Readable().wrap(oreader); myReader.on('readable', () => { myReader.read(); // etc. });@param streamAn "old style" readable stream
- chunk: any,): boolean;
The
writable.write()method writes some data to the stream, and calls the suppliedcallbackonce the data has been fully handled. If an error occurs, thecallbackwill be called with the error as its first argument. Thecallbackis called asynchronously and before'error'is emitted.The return value is
trueif the internal buffer is less than thehighWaterMarkconfigured when the stream was created after admittingchunk. Iffalseis returned, further attempts to write data to the stream should stop until the'drain'event is emitted.While a stream is not draining, calls to
write()will bufferchunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'event will be emitted. Oncewrite()returns false, do not write more chunks until the'drain'event is emitted. While callingwrite()on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform, because theTransformstreams are paused by default until they are piped or a'data'or'readable'event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readableand use pipe. However, if callingwrite()is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });A
Writablestream in object mode will always ignore theencodingargument.@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.@param callbackCallback for when this chunk of data is flushed.
@returnsfalseif the stream wishes for the calling code to wait for the'drain'event to be emitted before continuing to write additional data; otherwisetrue.chunk: any,encoding: BufferEncoding,): boolean;The
writable.write()method writes some data to the stream, and calls the suppliedcallbackonce the data has been fully handled. If an error occurs, thecallbackwill be called with the error as its first argument. Thecallbackis called asynchronously and before'error'is emitted.The return value is
trueif the internal buffer is less than thehighWaterMarkconfigured when the stream was created after admittingchunk. Iffalseis returned, further attempts to write data to the stream should stop until the'drain'event is emitted.While a stream is not draining, calls to
write()will bufferchunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'event will be emitted. Oncewrite()returns false, do not write more chunks until the'drain'event is emitted. While callingwrite()on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform, because theTransformstreams are paused by default until they are piped or a'data'or'readable'event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readableand use pipe. However, if callingwrite()is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });A
Writablestream in object mode will always ignore theencodingargument.@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.@param encodingThe encoding, if
chunkis a string.@param callbackCallback for when this chunk of data is flushed.
@returnsfalseif the stream wishes for the calling code to wait for the'drain'event to be emitted before continuing to write additional data; otherwisetrue.
interface Unzip
Transform streams are
Duplexstreams where the output is in some way related to the input. Like allDuplexstreams,Transformstreams implement both theReadableandWritableinterfaces.Examples of
Transformstreams include:zlib streamscrypto streams
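As a brief sketch, an Unzip stream detects the compression header itself, so the same stream can decompress either Gzip or Deflate input (buffer collection is simplified for illustration):

import zlib from 'node:zlib';

const gzip = zlib.createGzip();
const unzip = zlib.createUnzip();

// Unzip auto-detects the gzip header produced upstream.
gzip.pipe(unzip);

const chunks = [];
unzip.on('data', (chunk) => chunks.push(chunk));
unzip.on('end', () => {
  console.log(Buffer.concat(chunks).toString()); // 'hello unzip'
});

gzip.end('hello unzip');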
- allowHalfOpen: boolean
If
falsethen the stream will automatically end the writable side when the readable side ends. Set initially by theallowHalfOpenconstructor option, which defaults totrue.This can be changed manually to change the half-open behavior of an existing
Duplexstream instance, but must be changed before the'end'event is emitted. - readable: boolean
Is
trueif it is safe to call read, which means the stream has not been destroyed or emitted'error'or'end'. - readonly readableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'end'. - readonly readableEncoding: null | BufferEncoding
Getter for the property
encodingof a givenReadablestream. Theencodingproperty can be set using the setEncoding method. - readonly readableFlowing: null | boolean
This property reflects the current state of a
Readablestream as described in the Three states section. - readonly readableHighWaterMark: number
Returns the value of
highWaterMarkpassed when creating thisReadable. - readonly readableLength: number
This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the
highWaterMark. - readonly writable: boolean
Is
trueif it is safe to callwritable.write(), which means the stream has not been destroyed, errored, or ended. - readonly writableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'finish'. - readonly writableCorked: number
Number of times
writable.uncork()needs to be called in order to fully uncork the stream. - readonly writableEnded: boolean
Is
trueafterwritable.end()has been called. This property does not indicate whether the data has been flushed, for this usewritable.writableFinishedinstead. - readonly writableHighWaterMark: number
Return the value of
highWaterMarkpassed when creating thisWritable. - readonly writableLength: number
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the
highWaterMark. - readonly writableNeedDrain: boolean
Is
trueif the stream's buffer has been full and stream will emit'drain'. Calls
readable.destroy()with anAbortErrorand returns a promise that fulfills when the stream is finished.- @returns
AsyncIteratorto fully consume the stream. - event: 'close',listener: () => void): this;
Event emitter. Each of the overloads below registers a listener for one of the defined events:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'data',listener: (chunk: any) => void): this;
event: 'drain',listener: () => void): this;
event: 'end',listener: () => void): this;
event: 'error',): this;
event: 'finish',listener: () => void): this;
event: 'pause',listener: () => void): this;
event: 'pipe',): this;
event: 'readable',listener: () => void): this;
event: 'resume',listener: () => void): this;
event: 'unpipe',): this;
event: string | symbol,listener: (...args: any[]) => void): this;
This method returns a new stream with chunks of the underlying stream paired with a counter in the form
[index, chunk]. The first index value is0and it increases by 1 for each chunk produced.@returnsa stream of indexed pairs.
- stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
The
writable.cork()method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.The primary intent of
writable.cork()is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination,writable.cork()buffers all the chunks untilwritable.uncork()is called, which will pass them all towritable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use ofwritable.cork()without implementingwritable._writev()may have an adverse effect on throughput.See also:
writable.uncork(),writable._writev().- ): this;
Destroy the stream. Optionally emit an
'error'event, and emit a'close'event (unlessemitCloseis set tofalse). After this call, the readable stream will release any internal resources and subsequent calls topush()will be ignored.Once
destroy()has been called any further calls will be a no-op and no further errors except from_destroy()may be emitted as'error'.Implementors should not override this method, but instead implement
readable._destroy().@param errorError which will be passed as payload in
'error'event - drop(limit: number,
This method returns a new stream with the first limit chunks dropped from the start.
@param limitthe number of chunks to drop from the readable.
@returnsa stream with limit chunks dropped from the start.
- emit(event: 'close'): boolean;
Synchronously calls each of the listeners registered for the event named
eventName, in the order they were registered, passing the supplied arguments to each.Returns
trueif the event had listeners,falseotherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener - end(cb?: () => void): this;
Calling the
writable.end()method signals that no more data will be written to theWritable. The optionalchunkandencodingarguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!end(chunk: any,cb?: () => void): this;Calling the
writable.end()method signals that no more data will be written to theWritable. The optionalchunkandencodingarguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.end(chunk: any,encoding: BufferEncoding,cb?: () => void): this;Calling the
writable.end()method signals that no more data will be written to theWritable. The optionalchunkandencodingarguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.@param encodingThe encoding if
chunkis a string Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbols.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]- ): Promise<boolean>;
This method is similar to
Array.prototype.everyand calls fn on each chunk in the stream to check whether all awaited return values are truthy for fn. Once an fn call's awaited return value for a chunk is falsy, the stream is destroyed and the promise is fulfilled withfalse. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled withtrue.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
trueif fn returned a truthy value for every one of the chunks. This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be
awaited.@param fna function to filter chunks from the stream. Async or not.
@returnsa stream filtered with the predicate fn.
- ): Promise<undefined | T>;
This method is similar to
Array.prototype.findand calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled withundefined.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefinedif no element was found.find(): Promise<any>;This method is similar to
Array.prototype.findand calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled withundefined.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefinedif no element was found. This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.
It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.
@param fna function to map over every chunk in the stream. May be async. May be a stream or generator.
@returnsa stream flat-mapped with the function fn.
- ): Promise<void>;
This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be
awaited.This method is different from
for await...ofloops in that it can optionally process chunks concurrently. In addition, aforEachiteration can only be stopped by having passed asignaloption and aborting the related AbortController whilefor await...ofcan be stopped withbreakorreturn. In either case the stream will be destroyed.This method is different from listening to the
'data'event in that it uses thereadableevent in the underlying machinery and can limit the number of concurrent fn calls.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise for when the stream has finished.
Returns the current max listener value for the
EventEmitterwhich is either set byemitter.setMaxListeners(n)or defaults to EventEmitter.defaultMaxListeners.The
readable.isPaused()method returns the current operating state of theReadable. This is used primarily by the mechanism that underlies thereadable.pipe()method. In most typical cases, there will be no reason to use this method directly.const readable = new stream.Readable(); readable.isPaused(); // === false readable.pause(); readable.isPaused(); // === true readable.resume(); readable.isPaused(); // === false- options?: { destroyOnReturn: boolean }): AsyncIterator<any>;
The iterator created by this method gives users the option to cancel the destruction of the stream if the
for await...ofloop is exited byreturn,break, orthrow, or if the iterator should destroy the stream if the stream emitted an error during iteration. - eventName: string | symbol,listener?: Function): number;
Returns the number of listeners listening for the event named
eventName. Iflisteneris provided, it will return how many times the listener is found in the list of the listeners of the event.@param eventNameThe name of the event being listened for
@param listenerThe event handler function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName.server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ] - map(
This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be
awaited before being passed to the result stream.@param fna function to map over every chunk in the stream. Async or not.
@returnsa stream mapped with the function fn.
- eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for
emitter.removeListener(). - on(event: 'close',listener: () => void): this;
Adds the
listenerfunction to the end of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
- once(event: 'close',listener: () => void): this;
Adds a one-time
listenerfunction for the event namedeventName. The next timeeventNameis triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
The
readable.pause()method will cause a stream in flowing mode to stop emitting'data'events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.const readable = getReadableStreamSomehow(); readable.on('data', (chunk) => { console.log(`Received ${chunk.length} bytes of data.`); readable.pause(); console.log('There will be no additional data for 1 second.'); setTimeout(() => { console.log('Now data will start flowing again.'); readable.resume(); }, 1000); });The
readable.pause()method has no effect if there is a'readable'event listener.- event: 'close',listener: () => void): this;
Adds the
listenerfunction to the beginning of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
- event: 'close',listener: () => void): this;
Adds a one-time
listenerfunction for the event namedeventNameto the beginning of the listeners array. The next timeeventNameis triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName, including any wrappers (such as those created by.once()).import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log'); - read(size?: number): any;
The
readable.read()method reads data out of the internal buffer and returns it. If no data is available to be read,nullis returned. By default, the data is returned as aBufferobject unless an encoding has been specified using thereadable.setEncoding()method or the stream is operating in object mode.The optional
sizeargument specifies a specific number of bytes to read. Ifsizebytes are not available to be read,nullwill be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.If the
sizeargument is not specified, all of the data contained in the internal buffer will be returned.The
sizeargument must be less than or equal to 1 GiB.The
readable.read()method should only be called onReadablestreams operating in paused mode. In flowing mode,readable.read()is called automatically until the internal buffer is fully drained.const readable = getReadableStreamSomehow(); // 'readable' may be triggered multiple times as data is buffered in readable.on('readable', () => { let chunk; console.log('Stream is readable (new data received in buffer)'); // Use a loop to make sure we read all currently available data while (null !== (chunk = readable.read())) { console.log(`Read ${chunk.length} bytes of data...`); } }); // 'end' will be triggered once when there is no more data available readable.on('end', () => { console.log('Reached end of stream.'); });Each call to
readable.read()returns a chunk of data, ornull. The chunks are not concatenated. Awhileloop is necessary to consume all data currently in the buffer. When reading a large file,.read()may returnnull, having consumed all buffered content so far, but there is still more data to come that is not yet buffered. In this case a new'readable'event will be emitted when there is more data in the buffer. Finally, the'end'event will be emitted when there is no more data to come.
readable, it is necessary to collect chunks across multiple'readable'events:const chunks = []; readable.on('readable', () => { let chunk; while (null !== (chunk = readable.read())) { chunks.push(chunk); } }); readable.on('end', () => { const content = chunks.join(''); });A
Readablestream in object mode will always return a single item from a call toreadable.read(size), regardless of the value of thesizeargument.If the
readable.read()method returns a chunk of data, a'data'event will also be emitted.Calling read after the
'end'event has been emitted will returnnull. No runtime error will be raised.@param sizeOptional argument to specify how much data to read.
- initial?: undefined,): Promise<T>;
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeErrorwith theERR_INVALID_ARGScode property.The reducer function iterates the stream element-by-element which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the
readable.mapmethod.@param fna reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
initial: T,): Promise<T>;This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeErrorwith theERR_INVALID_ARGScode property.The reducer function iterates the stream element-by-element which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the
readable.mapmethod.@param fna reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
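A minimal sketch of reduce under the same experimental-helpers assumption, summing illustrative numeric chunks:

```js
import { Readable } from 'node:stream';

const total = await Readable.from([1, 2, 3, 4]).reduce(
  (acc, n) => acc + n,
  0, // initial value; omit it to use the first chunk instead
);
console.log(total); // 10
```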
- eventName?: string | symbol): this;
Removes all listeners, or those of the specified
eventName.It is bad practice to remove listeners added elsewhere in the code, particularly when the
EventEmitterinstance was created by some other component or module (e.g. sockets or file streams).Returns a reference to the
EventEmitter, so that calls can be chained. - event: 'close',listener: () => void): this;
Removes the specified
listenerfrom the listener array for the event namedeventName.const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback);removeListener()will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specifiedeventName, thenremoveListener()must be called multiple times to remove each instance.Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any
removeListener()orremoveAllListeners()calls after emitting and before the last listener finishes execution will not remove them fromemit()in progress. Subsequent events behave as expected.import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // ABecause listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the
emitter.listeners()method will need to be recreated.When a single function has been added as a handler multiple times for a single event (as in the example below),
removeListener()will remove the most recently added instance. In the example theonce('ping')listener is removed:import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping');Returns a reference to the
EventEmitter, so that calls can be chained. The
readable.resume()method causes an explicitly pausedReadablestream to resume emitting'data'events, switching the stream into flowing mode.The
readable.resume()method can be used to fully consume the data from a stream without actually processing any of that data:getReadableStreamSomehow() .resume() .on('end', () => { console.log('Reached the end, but did not read anything.'); });The
readable.resume()method has no effect if there is a'readable'event listener.- encoding: BufferEncoding): this;
The
writable.setDefaultEncoding()method sets the defaultencodingfor aWritablestream.@param encodingThe new default encoding
- encoding: BufferEncoding): this;
The
readable.setEncoding()method sets the character encoding for data read from theReadablestream.By default, no encoding is assigned and stream data will be returned as
Bufferobjects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than asBufferobjects. For instance, callingreadable.setEncoding('utf8')will cause the output data to be interpreted as UTF-8 data, and passed as strings. Callingreadable.setEncoding('hex')will cause the data to be encoded in hexadecimal string format.The
Readablestream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream asBufferobjects.const readable = getReadableStreamSomehow(); readable.setEncoding('utf8'); readable.on('data', (chunk) => { assert.equal(typeof chunk, 'string'); console.log('Got %d characters of string data:', chunk.length); });@param encodingThe encoding to use.
- n: number): this;
By default
EventEmitters will print a warning if more than10listeners are added for a particular event. This is a useful default that helps finding memory leaks. Theemitter.setMaxListeners()method allows the limit to be modified for this specificEventEmitterinstance. The value can be set toInfinity(or0) to indicate an unlimited number of listeners.Returns a reference to the
EventEmitter, so that calls can be chained. - some(): Promise<boolean>;
This method is similar to
Array.prototype.someand calls fn on each chunk in the stream until the awaited return value istrue(or any truthy value). Once an fn call's awaited return value on a chunk is truthy, the stream is destroyed and the promise is fulfilled withtrue. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled withfalse.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
trueif fn returned a truthy value for at least one of the chunks. - @param limit
the number of chunks to take from the readable.
@returnsa stream with limit chunks taken.
- ): Promise<any[]>;
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.
@returnsa promise containing an array with the contents of the stream.
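A minimal sketch combining the some, take, and toArray helpers described above, with illustrative values:

```js
import { Readable } from 'node:stream';

// some() short-circuits: the stream is destroyed on the first truthy result.
const hasLarge = await Readable.from([1, 2, 30]).some((n) => n > 10);
console.log(hasLarge); // true

// take() limits the stream; toArray() buffers whatever remains into memory.
const firstTwo = await Readable.from([1, 2, 3]).take(2).toArray();
console.log(firstTwo); // [1, 2]
```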
The
writable.uncork()method flushes all data buffered since cork was called.When using
writable.cork()andwritable.uncork()to manage the buffering of writes to a stream, defer calls towritable.uncork()usingprocess.nextTick(). Doing so allows batching of allwritable.write()calls that occur within a given Node.js event loop phase.stream.cork(); stream.write('some '); stream.write('data '); process.nextTick(() => stream.uncork());If the
writable.cork()method is called multiple times on a stream, the same number of calls towritable.uncork()must be called to flush the buffered data.stream.cork(); stream.write('some '); stream.cork(); stream.write('data '); process.nextTick(() => { stream.uncork(); // The data will not be flushed until uncork() is called a second time. stream.uncork(); });See also:
writable.cork().- destination?: WritableStream): this;
The
readable.unpipe()method detaches aWritablestream previously attached using the pipe method.If the
destinationis not specified, then all pipes are detached.If the
destinationis specified, but no pipe is set up for it, then the method does nothing.import fs from 'node:fs'; const readable = getReadableStreamSomehow(); const writable = fs.createWriteStream('file.txt'); // All the data from readable goes into 'file.txt', // but only for the first second. readable.pipe(writable); setTimeout(() => { console.log('Stop writing to file.txt.'); readable.unpipe(writable); console.log('Manually close the file stream.'); writable.end(); }, 1000);@param destinationOptional specific stream to unpipe
- chunk: any,encoding?: BufferEncoding): void;
Passing
chunkasnullsignals the end of the stream (EOF) and behaves the same asreadable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.The
readable.unshift()method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.The
stream.unshift(chunk)method cannot be called after the'end'event has been emitted or a runtime error will be thrown.Developers using
stream.unshift()often should consider switching to use of aTransformstream instead. See theAPI for stream implementerssection for more information.// Pull off a header delimited by \n\n. // Use unshift() if we get too much. // Call the callback with (error, header, stream). import { StringDecoder } from 'node:string_decoder'; function parseHeader(stream, callback) { stream.on('error', callback); stream.on('readable', onReadable); const decoder = new StringDecoder('utf8'); let header = ''; function onReadable() { let chunk; while (null !== (chunk = stream.read())) { const str = decoder.write(chunk); if (str.includes('\n\n')) { // Found the header boundary. const split = str.split(/\n\n/); header += split.shift(); const remaining = split.join('\n\n'); const buf = Buffer.from(remaining, 'utf8'); stream.removeListener('error', callback); // Remove the 'readable' listener before unshifting. stream.removeListener('readable', onReadable); if (buf.length) stream.unshift(buf); // Now the body of the message can be read from the stream. callback(null, header, stream); return; } // Still reading the header. header += str; } } }Unlike push,
stream.unshift(chunk)will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results ifreadable.unshift()is called during a read (i.e. from within a _read implementation on a custom stream). Following the call toreadable.unshift()with an immediate push will reset the reading state appropriately, however it is best to simply avoid callingreadable.unshift()while in the process of performing a read.@param chunkChunk of data to unshift onto the read queue. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray}, {DataView} ornull. For object mode streams,chunkmay be any JavaScript value.@param encodingEncoding of string chunks. Must be a valid
Bufferencoding, such as'utf8'or'ascii'. - wrap(stream: ReadableStream): this;
Prior to Node.js 0.10, streams did not implement the entire
node:streammodule API as it is currently defined. (SeeCompatibilityfor more information.)When using an older Node.js library that emits
'data'events and has a pause method that is advisory only, thereadable.wrap()method can be used to create aReadablestream that uses the old stream as its data source.It will rarely be necessary to use
readable.wrap()but the method has been provided as a convenience for interacting with older Node.js applications and libraries.import { OldReader } from './old-api-module.js'; import { Readable } from 'node:stream'; const oreader = new OldReader(); const myReader = new Readable().wrap(oreader); myReader.on('readable', () => { myReader.read(); // etc. });@param streamAn "old style" readable stream
- chunk: any,): boolean;
The
writable.write()method writes some data to the stream, and calls the suppliedcallbackonce the data has been fully handled. If an error occurs, thecallbackwill be called with the error as its first argument. Thecallbackis called asynchronously and before'error'is emitted.The return value is
trueif the internal buffer is less than thehighWaterMarkconfigured when the stream was created after admittingchunk. Iffalseis returned, further attempts to write data to the stream should stop until the'drain'event is emitted.While a stream is not draining, calls to
write()will bufferchunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'event will be emitted. Oncewrite()returns false, do not write more chunks until the'drain'event is emitted. While callingwrite()on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform, because theTransformstreams are paused by default until they are piped or a'data'or'readable'event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readableand use pipe. However, if callingwrite()is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });A
Writablestream in object mode will always ignore theencodingargument.@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.@param callbackCallback for when this chunk of data is flushed.
@returnsfalseif the stream wishes for the calling code to wait for the'drain'event to be emitted before continuing to write additional data; otherwisetrue.chunk: any,encoding: BufferEncoding,): boolean;The
writable.write()method writes some data to the stream, and calls the suppliedcallbackonce the data has been fully handled. If an error occurs, thecallbackwill be called with the error as its first argument. Thecallbackis called asynchronously and before'error'is emitted.The return value is
trueif the internal buffer is less than thehighWaterMarkconfigured when the stream was created after admittingchunk. Iffalseis returned, further attempts to write data to the stream should stop until the'drain'event is emitted.While a stream is not draining, calls to
write()will bufferchunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'event will be emitted. Oncewrite()returns false, do not write more chunks until the'drain'event is emitted. While callingwrite()on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform, because theTransformstreams are paused by default until they are piped or a'data'or'readable'event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readableand use pipe. However, if callingwrite()is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });A
Writablestream in object mode will always ignore theencodingargument.@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.@param encodingThe encoding, if
chunkis a string.@param callbackCallback for when this chunk of data is flushed.
@returnsfalseif the stream wishes for the calling code to wait for the'drain'event to be emitted before continuing to write additional data; otherwisetrue.
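Since this page documents node:zlib, a hedged sketch of the drain-aware pattern above applied to a Brotli compressor follows; the output path is hypothetical:

```js
import { createBrotliCompress } from 'node:zlib';
import { createWriteStream } from 'node:fs';

const brotli = createBrotliCompress();
brotli.pipe(createWriteStream('output.br')); // hypothetical destination

function writeChunk(data, cb) {
  if (!brotli.write(data)) {
    brotli.once('drain', cb); // buffer is full; wait before writing more
  } else {
    process.nextTick(cb);
  }
}

writeChunk('hello', () => console.log('safe to write again'));
```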
interface ZlibOptions
interface ZlibParams
interface ZstdCompress
- allowHalfOpen: boolean
If
falsethen the stream will automatically end the writable side when the readable side ends. Set initially by theallowHalfOpenconstructor option, which defaults totrue.This can be changed manually to change the half-open behavior of an existing
Duplexstream instance, but must be changed before the'end'event is emitted. - readable: boolean
Is
trueif it is safe to call read, which means the stream has not been destroyed or emitted'error'or'end'. - readonly readableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'end'. - readonly readableEncoding: null | BufferEncoding
Getter for the property
encodingof a givenReadablestream. Theencodingproperty can be set using the setEncoding method. - readonly readableFlowing: null | boolean
This property reflects the current state of a
Readablestream as described in the Three states section. - readonly readableHighWaterMark: number
Returns the value of
highWaterMarkpassed when creating thisReadable. - readonly readableLength: number
This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the
highWaterMark. - readonly writable: boolean
Is
trueif it is safe to callwritable.write(), which means the stream has not been destroyed, errored, or ended. - readonly writableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'finish'. - readonly writableCorked: number
Number of times
writable.uncork()needs to be called in order to fully uncork the stream. - readonly writableEnded: boolean
Is
trueafterwritable.end()has been called. This property does not indicate whether the data has been flushed, for this usewritable.writableFinishedinstead. - readonly writableHighWaterMark: number
Return the value of
highWaterMarkpassed when creating thisWritable. - readonly writableLength: number
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the
highWaterMark. - readonly writableNeedDrain: boolean
Is
trueif the stream's buffer has been full and the stream will emit'drain'. Calls
readable.destroy()with anAbortErrorand returns a promise that fulfills when the stream is finished.- @returns
AsyncIteratorto fully consume the stream. - event: 'close',listener: () => void): this;
Event emitter. The defined events include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'data',listener: (chunk: any) => void): this;Event emitter. The defined events include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'drain',listener: () => void): this;Event emitter. The defined events include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'end',listener: () => void): this;Event emitter. The defined events include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'error',): this;Event emitter. The defined events include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'finish',listener: () => void): this;Event emitter. The defined events include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'pause',listener: () => void): this;Event emitter. The defined events include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'pipe',): this;Event emitter. The defined events include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'readable',listener: () => void): this;Event emitter. The defined events include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'resume',listener: () => void): this;Event emitter. The defined events include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'unpipe',): this;Event emitter. The defined events include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: string | symbol,listener: (...args: any[]) => void): this;Event emitter. The defined events include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
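A minimal sketch of attaching listeners for the events listed above, assuming a runtime that ships the createZstdCompress factory documented earlier on this page:

```js
import { createZstdCompress } from 'node:zlib';

const zstd = createZstdCompress();
zstd.on('data', (chunk) => console.log(`got ${chunk.length} compressed bytes`));
zstd.on('error', (err) => console.error('compression failed:', err));
zstd.on('end', () => console.log('no more data'));
zstd.end('some input to compress'); // illustrative input
```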
This method returns a new stream with chunks of the underlying stream paired with a counter in the form
[index, chunk]. The first index value is0and it increases by 1 for each chunk produced.@returnsa stream of indexed pairs.
- stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
The
writable.cork()method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.The primary intent of
writable.cork()is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination,writable.cork()buffers all the chunks untilwritable.uncork()is called, which will pass them all towritable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use ofwritable.cork()without implementingwritable._writev()may have an adverse effect on throughput.See also:
writable.uncork(),writable._writev().- ): this;
Destroy the stream. Optionally emit an
'error'event, and emit a'close'event (unlessemitCloseis set tofalse). After this call, the readable stream will release any internal resources and subsequent calls topush()will be ignored.Once
destroy()has been called any further calls will be a no-op and no further errors except from_destroy()may be emitted as'error'.Implementors should not override this method, but instead implement
readable._destroy().@param errorError which will be passed as payload in
'error'event - drop(limit: number,
This method returns a new stream with the first limit chunks dropped from the start.
@param limitthe number of chunks to drop from the readable.
@returnsa stream with limit chunks dropped from the start.
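A minimal sketch of drop, again assuming the experimental stream helpers; values are illustrative:

```js
import { Readable } from 'node:stream';

// Skips the first two chunks and yields the rest.
const rest = await Readable.from([1, 2, 3, 4]).drop(2).toArray();
console.log(rest); // [3, 4]
```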
- emit(event: 'close'): boolean;
Synchronously calls each of the listeners registered for the event named
eventName, in the order they were registered, passing the supplied arguments to each.Returns
trueif the event had listeners,falseotherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener - end(cb?: () => void): this;
Calling the
writable.end()method signals that no more data will be written to theWritable. The optionalchunkandencodingarguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!end(chunk: any,cb?: () => void): this;Calling the
writable.end()method signals that no more data will be written to theWritable. The optionalchunkandencodingarguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.end(chunk: any,encoding: BufferEncoding,cb?: () => void): this;Calling the
writable.end()method signals that no more data will be written to theWritable. The optionalchunkandencodingarguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.@param encodingThe encoding if
chunkis a string Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbols.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]- ): Promise<boolean>;
This method is similar to
Array.prototype.everyand calls fn on each chunk in the stream to check if all awaited return values are truthy for fn. Once an fn call's awaited return value on a chunk is falsy, the stream is destroyed and the promise is fulfilled withfalse. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled withtrue.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
trueif fn returned a truthy value for every one of the chunks. This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be
awaited.@param fna function to filter chunks from the stream. Async or not.
@returnsa stream filtered with the predicate fn.
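A minimal sketch of the every and filter helpers described above, with illustrative values:

```js
import { Readable } from 'node:stream';

// every() resolves false (and destroys the stream) on the first falsy result.
const allPositive = await Readable.from([1, 2, 3]).every((n) => n > 0);
console.log(allPositive); // true

// filter() keeps only the chunks for which the predicate is truthy.
const evens = await Readable.from([1, 2, 3, 4]).filter((n) => n % 2 === 0);
console.log(await evens.toArray()); // [2, 4]
```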
- ): Promise<undefined | T>;
This method is similar to
Array.prototype.findand calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled withundefined.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefinedif no element was found.find(): Promise<any>;This method is similar to
Array.prototype.findand calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled withundefined.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefinedif no element was found. This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.
It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.
@param fna function to map over every chunk in the stream. May be async. May be a stream or generator.
@returnsa stream flat-mapped with the function fn.
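A minimal sketch of flatMap, splitting illustrative string chunks into words:

```js
import { Readable } from 'node:stream';

// Each chunk maps to an iterable whose items are flattened into the output.
const words = Readable.from(['a b', 'c']).flatMap((line) => line.split(' '));
console.log(await words.toArray()); // ['a', 'b', 'c']
```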
- ): Promise<void>;
This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be
awaited.This method is different from
for await...ofloops in that it can optionally process chunks concurrently. In addition, aforEachiteration can only be stopped by having passed asignaloption and aborting the related AbortController whilefor await...ofcan be stopped withbreakorreturn. In either case the stream will be destroyed.This method is different from listening to the
'data'event in that it uses thereadableevent in the underlying machinery and can limit the number of concurrent fn calls.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise for when the stream has finished.
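A minimal sketch of forEach with the concurrency option mentioned above; the delay merely simulates async work:

```js
import { Readable } from 'node:stream';
import { setTimeout as wait } from 'node:timers/promises';

// Up to two fn calls may be in flight at once via the concurrency option.
await Readable.from([1, 2, 3, 4]).forEach(
  async (n) => {
    await wait(10); // simulate async work per chunk
    console.log(n);
  },
  { concurrency: 2 },
);
```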
Returns the current max listener value for the
EventEmitterwhich is either set byemitter.setMaxListeners(n)or defaults to EventEmitter.defaultMaxListeners.The
readable.isPaused()method returns the current operating state of theReadable. This is used primarily by the mechanism that underlies thereadable.pipe()method. In most typical cases, there will be no reason to use this method directly.const readable = new stream.Readable(); readable.isPaused(); // === false readable.pause(); readable.isPaused(); // === true readable.resume(); readable.isPaused(); // === false- options?: { destroyOnReturn: boolean }): AsyncIterator<any>;
The iterator created by this method gives users the option to cancel the destruction of the stream if the
for await...ofloop is exited byreturn,break, orthrow, or to let the iterator destroy the stream if the stream emitted an error during iteration. - eventName: string | symbol,listener?: Function): number;
Returns the number of listeners listening for the event named
eventName. Iflisteneris provided, it will return how many times the listener is found in the list of the listeners of the event.@param eventNameThe name of the event being listened for
@param listenerThe event handler function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName.server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ] - map(
This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be
awaited before being passed to the result stream.@param fna function to map over every chunk in the stream. Async or not.
@returnsa stream mapped with the function fn.
- eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for
emitter.removeListener(). - on(event: 'close',listener: () => void): this;
Adds the
listenerfunction to the end of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
- once(event: 'close',listener: () => void): this;
Adds a one-time
listenerfunction for the event namedeventName. The next timeeventNameis triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
The
readable.pause()method will cause a stream in flowing mode to stop emitting'data'events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.const readable = getReadableStreamSomehow(); readable.on('data', (chunk) => { console.log(`Received ${chunk.length} bytes of data.`); readable.pause(); console.log('There will be no additional data for 1 second.'); setTimeout(() => { console.log('Now data will start flowing again.'); readable.resume(); }, 1000); });The
readable.pause()method has no effect if there is a'readable'event listener.- event: 'close',listener: () => void): this;
Adds the
listenerfunction to the beginning of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
- event: 'close',listener: () => void): this;
Adds a one-time
listenerfunction for the event namedeventNameto the beginning of the listeners array. The next timeeventNameis triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName, including any wrappers (such as those created by.once()).import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log'); - read(size?: number): any;
The
readable.read()method reads data out of the internal buffer and returns it. If no data is available to be read,nullis returned. By default, the data is returned as aBufferobject unless an encoding has been specified using thereadable.setEncoding()method or the stream is operating in object mode.The optional
sizeargument specifies a specific number of bytes to read. Ifsizebytes are not available to be read,nullwill be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.If the
sizeargument is not specified, all of the data contained in the internal buffer will be returned.The
sizeargument must be less than or equal to 1 GiB.The
readable.read()method should only be called onReadablestreams operating in paused mode. In flowing mode,readable.read()is called automatically until the internal buffer is fully drained.const readable = getReadableStreamSomehow(); // 'readable' may be triggered multiple times as data is buffered in readable.on('readable', () => { let chunk; console.log('Stream is readable (new data received in buffer)'); // Use a loop to make sure we read all currently available data while (null !== (chunk = readable.read())) { console.log(`Read ${chunk.length} bytes of data...`); } }); // 'end' will be triggered once when there is no more data available readable.on('end', () => { console.log('Reached end of stream.'); });Each call to
readable.read()returns a chunk of data, ornull. The chunks are not concatenated. Awhileloop is necessary to consume all data currently in the buffer. When reading a large file,.read()may returnnull, having consumed all buffered content so far, but there is still more data to come that is not yet buffered. In this case a new'readable'event will be emitted when there is more data in the buffer. Finally, the'end'event will be emitted when there is no more data to come.
readable, it is necessary to collect chunks across multiple'readable'events:const chunks = []; readable.on('readable', () => { let chunk; while (null !== (chunk = readable.read())) { chunks.push(chunk); } }); readable.on('end', () => { const content = chunks.join(''); });A
Readablestream in object mode will always return a single item from a call toreadable.read(size), regardless of the value of thesizeargument.If the
readable.read()method returns a chunk of data, a'data'event will also be emitted.Calling read after the
'end'event has been emitted will returnnull. No runtime error will be raised.@param sizeOptional argument to specify how much data to read.
- initial?: undefined,): Promise<T>;
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeErrorwith theERR_INVALID_ARGScode property.The reducer function iterates the stream element-by-element which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the
readable.mapmethod.@param fna reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
initial: T,): Promise<T>;This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeErrorwith theERR_INVALID_ARGScode property.The reducer function iterates the stream element-by-element which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the
readable.mapmethod.@param fna reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
- eventName?: string | symbol): this;
Removes all listeners, or those of the specified
eventName.It is bad practice to remove listeners added elsewhere in the code, particularly when the
EventEmitterinstance was created by some other component or module (e.g. sockets or file streams).Returns a reference to the
EventEmitter, so that calls can be chained. - event: 'close',listener: () => void): this;
Removes the specified
listenerfrom the listener array for the event namedeventName.const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback);removeListener()will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specifiedeventName, thenremoveListener()must be called multiple times to remove each instance.Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any
removeListener()orremoveAllListeners()calls after emitting and before the last listener finishes execution will not remove them fromemit()in progress. Subsequent events behave as expected.import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // ABecause listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the
emitter.listeners()method will need to be recreated.When a single function has been added as a handler multiple times for a single event (as in the example below),
removeListener()will remove the most recently added instance. In the example theonce('ping')listener is removed:import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping');Returns a reference to the
EventEmitter, so that calls can be chained. The
readable.resume()method causes an explicitly pausedReadablestream to resume emitting'data'events, switching the stream into flowing mode.The
readable.resume()method can be used to fully consume the data from a stream without actually processing any of that data:getReadableStreamSomehow() .resume() .on('end', () => { console.log('Reached the end, but did not read anything.'); });The
readable.resume()method has no effect if there is a'readable'event listener.- encoding: BufferEncoding): this;
The
writable.setDefaultEncoding()method sets the defaultencodingfor aWritablestream.@param encodingThe new default encoding
- encoding: BufferEncoding): this;
The
readable.setEncoding()method sets the character encoding for data read from theReadablestream.By default, no encoding is assigned and stream data will be returned as
Bufferobjects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than asBufferobjects. For instance, callingreadable.setEncoding('utf8')will cause the output data to be interpreted as UTF-8 data, and passed as strings. Callingreadable.setEncoding('hex')will cause the data to be encoded in hexadecimal string format.The
Readablestream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream asBufferobjects.const readable = getReadableStreamSomehow(); readable.setEncoding('utf8'); readable.on('data', (chunk) => { assert.equal(typeof chunk, 'string'); console.log('Got %d characters of string data:', chunk.length); });@param encodingThe encoding to use.
- n: number): this;
By default
EventEmitters will print a warning if more than10listeners are added for a particular event. This is a useful default that helps finding memory leaks. Theemitter.setMaxListeners()method allows the limit to be modified for this specificEventEmitterinstance. The value can be set toInfinity(or0) to indicate an unlimited number of listeners.Returns a reference to the
EventEmitter, so that calls can be chained. - some(): Promise<boolean>;
This method is similar to
Array.prototype.someand calls fn on each chunk in the stream until the awaited return value istrue(or any truthy value). Once an fn call's awaited return value on a chunk is truthy, the stream is destroyed and the promise is fulfilled withtrue. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled withfalse.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
trueif fn returned a truthy value for at least one of the chunks. - @param limit
the number of chunks to take from the readable.
@returnsa stream with limit chunks taken.
- ): Promise<any[]>;
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.
@returnsa promise containing an array with the contents of the stream.
The
writable.uncork()method flushes all data buffered since cork was called.When using
writable.cork()andwritable.uncork()to manage the buffering of writes to a stream, defer calls towritable.uncork()usingprocess.nextTick(). Doing so allows batching of allwritable.write()calls that occur within a given Node.js event loop phase.stream.cork(); stream.write('some '); stream.write('data '); process.nextTick(() => stream.uncork());If the
writable.cork()method is called multiple times on a stream, the same number of calls towritable.uncork()must be called to flush the buffered data.stream.cork(); stream.write('some '); stream.cork(); stream.write('data '); process.nextTick(() => { stream.uncork(); // The data will not be flushed until uncork() is called a second time. stream.uncork(); });See also:
writable.cork().- destination?: WritableStream): this;
The
readable.unpipe()method detaches aWritablestream previously attached using the pipe method.If the
destinationis not specified, then all pipes are detached.If the
destinationis specified, but no pipe is set up for it, then the method does nothing.import fs from 'node:fs'; const readable = getReadableStreamSomehow(); const writable = fs.createWriteStream('file.txt'); // All the data from readable goes into 'file.txt', // but only for the first second. readable.pipe(writable); setTimeout(() => { console.log('Stop writing to file.txt.'); readable.unpipe(writable); console.log('Manually close the file stream.'); writable.end(); }, 1000);@param destinationOptional specific stream to unpipe
- chunk: any,encoding?: BufferEncoding): void;
Passing
chunkasnullsignals the end of the stream (EOF) and behaves the same asreadable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.The
readable.unshift()method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.The
stream.unshift(chunk)method cannot be called after the'end'event has been emitted or a runtime error will be thrown.Developers using
stream.unshift()often should consider switching to use of aTransformstream instead. See theAPI for stream implementerssection for more information.// Pull off a header delimited by \n\n. // Use unshift() if we get too much. // Call the callback with (error, header, stream). import { StringDecoder } from 'node:string_decoder'; function parseHeader(stream, callback) { stream.on('error', callback); stream.on('readable', onReadable); const decoder = new StringDecoder('utf8'); let header = ''; function onReadable() { let chunk; while (null !== (chunk = stream.read())) { const str = decoder.write(chunk); if (str.includes('\n\n')) { // Found the header boundary. const split = str.split(/\n\n/); header += split.shift(); const remaining = split.join('\n\n'); const buf = Buffer.from(remaining, 'utf8'); stream.removeListener('error', callback); // Remove the 'readable' listener before unshifting. stream.removeListener('readable', onReadable); if (buf.length) stream.unshift(buf); // Now the body of the message can be read from the stream. callback(null, header, stream); return; } // Still reading the header. header += str; } } }Unlike push,
stream.unshift(chunk)will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results ifreadable.unshift()is called during a read (i.e. from within a _read implementation on a custom stream). Following the call toreadable.unshift()with an immediate push will reset the reading state appropriately, however it is best to simply avoid callingreadable.unshift()while in the process of performing a read.@param chunkChunk of data to unshift onto the read queue. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray}, {DataView} ornull. For object mode streams,chunkmay be any JavaScript value.@param encodingEncoding of string chunks. Must be a valid
Bufferencoding, such as'utf8'or'ascii'. - wrap(stream: ReadableStream): this;
Prior to Node.js 0.10, streams did not implement the entire
node:streammodule API as it is currently defined. (SeeCompatibilityfor more information.)When using an older Node.js library that emits
'data'events and has a pause method that is advisory only, thereadable.wrap()method can be used to create aReadablestream that uses the old stream as its data source.It will rarely be necessary to use
readable.wrap()but the method has been provided as a convenience for interacting with older Node.js applications and libraries.import { OldReader } from './old-api-module.js'; import { Readable } from 'node:stream'; const oreader = new OldReader(); const myReader = new Readable().wrap(oreader); myReader.on('readable', () => { myReader.read(); // etc. });@param streamAn "old style" readable stream
- chunk: any,): boolean;
The
writable.write()method writes some data to the stream, and calls the suppliedcallbackonce the data has been fully handled. If an error occurs, thecallbackwill be called with the error as its first argument. Thecallbackis called asynchronously and before'error'is emitted.The return value is
trueif the internal buffer is less than thehighWaterMarkconfigured when the stream was created after admittingchunk. Iffalseis returned, further attempts to write data to the stream should stop until the'drain'event is emitted.While a stream is not draining, calls to
write()will bufferchunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'event will be emitted. Oncewrite()returns false, do not write more chunks until the'drain'event is emitted. While callingwrite()on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform, because theTransformstreams are paused by default until they are piped or a'data'or'readable'event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readableand use pipe. However, if callingwrite()is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });A
Writablestream in object mode will always ignore theencodingargument.@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.@param callbackCallback for when this chunk of data is flushed.
@returnsfalseif the stream wishes for the calling code to wait for the'drain'event to be emitted before continuing to write additional data; otherwisetrue.chunk: any,encoding: BufferEncoding,): boolean;The
writable.write()method writes some data to the stream, and calls the suppliedcallbackonce the data has been fully handled. If an error occurs, thecallbackwill be called with the error as its first argument. Thecallbackis called asynchronously and before'error'is emitted.The return value is
trueif the internal buffer is less than thehighWaterMarkconfigured when the stream was created after admittingchunk. Iffalseis returned, further attempts to write data to the stream should stop until the'drain'event is emitted.While a stream is not draining, calls to
write()will bufferchunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'event will be emitted. Oncewrite()returns false, do not write more chunks until the'drain'event is emitted. While callingwrite()on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform, because theTransformstreams are paused by default until they are piped or a'data'or'readable'event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readableand use pipe. However, if callingwrite()is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });A
Writablestream in object mode will always ignore theencodingargument.@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.@param encodingThe encoding, if
chunkis a string.@param callbackCallback for when this chunk of data is flushed.
@returnsfalseif the stream wishes for the calling code to wait for the'drain'event to be emitted before continuing to write additional data; otherwisetrue.
interface ZstdDecompress
- allowHalfOpen: boolean
If
falsethen the stream will automatically end the writable side when the readable side ends. Set initially by theallowHalfOpenconstructor option, which defaults totrue.This can be changed manually to change the half-open behavior of an existing
Duplexstream instance, but must be changed before the'end'event is emitted. - readable: boolean
Is
trueif it is safe to call read, which means the stream has not been destroyed or emitted'error'or'end'. - readonly readableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'end'. - readonly readableEncoding: null | BufferEncoding
Getter for the property
encodingof a givenReadablestream. Theencodingproperty can be set using the setEncoding method. - readonly readableFlowing: null | boolean
This property reflects the current state of a
Readablestream as described in the Three states section. - readonly readableHighWaterMark: number
Returns the value of
highWaterMarkpassed when creating thisReadable. - readonly readableLength: number
This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the
highWaterMark. - readonly writable: boolean
Is
trueif it is safe to callwritable.write(), which means the stream has not been destroyed, errored, or ended. - readonly writableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'finish'. - readonly writableCorked: number
Number of times
writable.uncork()needs to be called in order to fully uncork the stream. - readonly writableEnded: boolean
Is
trueafterwritable.end()has been called. This property does not indicate whether the data has been flushed, for this usewritable.writableFinishedinstead. - readonly writableHighWaterMark: number
Return the value of
highWaterMarkpassed when creating thisWritable. - readonly writableLength: number
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the
highWaterMark. - readonly writableNeedDrain: boolean
Is
trueif the stream's buffer has been full and stream will emit'drain'. Calls
readable.destroy()with anAbortErrorand returns a promise that fulfills when the stream is finished.- @returns
AsyncIteratorto fully consume the stream. - event: 'close',listener: () => void): this;
Event emitter. The defined events on this stream include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe

The same event list applies to each of the remaining overloads:
event: 'data',listener: (chunk: any) => void): this;
event: 'drain',listener: () => void): this;
event: 'end',listener: () => void): this;
event: 'error',listener: (err: Error) => void): this;
event: 'finish',listener: () => void): this;
event: 'pause',listener: () => void): this;
event: 'pipe',listener: (src: Readable) => void): this;
event: 'readable',listener: () => void): this;
event: 'resume',listener: () => void): this;
event: 'unpipe',listener: (src: Readable) => void): this;
event: string | symbol,listener: (...args: any[]) => void): this;
This method returns a new stream with chunks of the underlying stream paired with a counter in the form
[index, chunk]. The first index value is0and it increases by 1 for each chunk produced.@returnsa stream of indexed pairs.
- stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
The
writable.cork()method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.The primary intent of
writable.cork()is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination,writable.cork()buffers all the chunks untilwritable.uncork()is called, which will pass them all towritable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use ofwritable.cork()without implementingwritable._writev()may have an adverse effect on throughput.See also:
writable.uncork(),writable._writev().- ): this;
Destroy the stream. Optionally emit an
'error'event, and emit a'close'event (unlessemitCloseis set tofalse). After this call, the readable stream will release any internal resources and subsequent calls topush()will be ignored.Once
destroy()has been called any further calls will be a no-op and no further errors except from_destroy()may be emitted as'error'.Implementors should not override this method, but instead implement
readable._destroy().@param errorError which will be passed as payload in
'error'event - drop(limit: number,
This method returns a new stream with the first limit chunks dropped from the start.
@param limitthe number of chunks to drop from the readable.
@returnsa stream with limit chunks dropped from the start.
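A minimal sketch, using Readable.from as a stand-in source (these experimental helpers are available on any Readable, including the stream interfaces documented here):

import { Readable } from 'node:stream';

// Drop the first two chunks; logs 3 and 4.
for await (const chunk of Readable.from([1, 2, 3, 4]).drop(2)) {
  console.log(chunk);
}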
- emit(event: 'close'): boolean;
Synchronously calls each of the listeners registered for the event named
eventName, in the order they were registered, passing the supplied arguments to each.Returns
trueif the event had listeners,falseotherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener - end(cb?: () => void): this;
Calling the
writable.end()method signals that no more data will be written to theWritable. The optionalchunkandencodingarguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!end(chunk: any,cb?: () => void): this;Calling the
writable.end()method signals that no more data will be written to theWritable. The optionalchunkandencodingarguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.end(chunk: any,encoding: BufferEncoding,cb?: () => void): this;Calling the
writable.end()method signals that no more data will be written to theWritable. The optionalchunkandencodingarguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.@param encodingThe encoding if
chunkis a string Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbols.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]- ): Promise<boolean>;
This method is similar to
Array.prototype.every and calls fn on each chunk in the stream to check whether all awaited return values are truthy for fn. Once an fn call on a chunk's awaited return value is falsy, the stream is destroyed and the promise is fulfilled with false. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true.
@returnsa promise evaluating to
trueif fn returned a truthy value for every one of the chunks. This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be
awaited.@param fna function to filter chunks from the stream. Async or not.
@returnsa stream filtered with the predicate fn.
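A minimal sketch of both helpers, using Readable.from as a stand-in source:

import { Readable } from 'node:stream';

// every(): resolves with true because all chunks are positive.
const allPositive = await Readable.from([1, 2, 3]).every((x) => x > 0);
console.log(allPositive); // true

// filter(): keep only even chunks; logs 2 and 4.
for await (const chunk of Readable.from([1, 2, 3, 4]).filter((x) => x % 2 === 0)) {
  console.log(chunk);
}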
- ): Promise<undefined | T>;
This method is similar to
Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefinedif no element was found.find(): Promise<any>;This method is similar to
Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefinedif no element was found. This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.
It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.
@param fna function to map over every chunk in the stream. May be async. May be a stream or generator.
@returnsa stream flat-mapped with the function fn.
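A minimal sketch of find and flatMap, using Readable.from as a stand-in source:

import { Readable } from 'node:stream';

// find(): resolves with the first chunk greater than 2, here 3.
const found = await Readable.from([1, 2, 3, 4]).find((x) => x > 2);
console.log(found); // 3

// flatMap(): each chunk maps to an iterable that is flattened into
// the result stream; logs 1, 1, 2, 2, 3, 3.
for await (const chunk of Readable.from([1, 2, 3]).flatMap((x) => [x, x])) {
  console.log(chunk);
}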
- ): Promise<void>;
This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be
awaited.This method is different from
for await...ofloops in that it can optionally process chunks concurrently. In addition, aforEachiteration can only be stopped by having passed asignaloption and aborting the related AbortController whilefor await...ofcan be stopped withbreakorreturn. In either case the stream will be destroyed.This method is different from listening to the
'data'event in that it uses thereadableevent in the underlying machinery and can limit the number of concurrent fn calls.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise for when the stream has finished.
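A minimal sketch showing the concurrency option, using Readable.from as a stand-in source:

import { Readable } from 'node:stream';

// Process up to two chunks at a time; the promise resolves when
// the stream has finished.
await Readable.from([1, 2, 3, 4]).forEach(
  async (chunk) => {
    console.log(chunk);
  },
  { concurrency: 2 },
);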
Returns the current max listener value for the
EventEmitterwhich is either set byemitter.setMaxListeners(n)or defaults to EventEmitter.defaultMaxListeners.The
readable.isPaused()method returns the current operating state of theReadable. This is used primarily by the mechanism that underlies thereadable.pipe()method. In most typical cases, there will be no reason to use this method directly.const readable = new stream.Readable(); readable.isPaused(); // === false readable.pause(); readable.isPaused(); // === true readable.resume(); readable.isPaused(); // === false- options?: { destroyOnReturn: boolean }): AsyncIterator<any>;
The iterator created by this method gives users the option to cancel the destruction of the stream if the
for await...ofloop is exited byreturn,break, orthrow, or if the iterator should destroy the stream if the stream emitted an error during iteration. - eventName: string | symbol,listener?: Function): number;
Returns the number of listeners listening for the event named
eventName. Iflisteneris provided, it will return how many times the listener is found in the list of the listeners of the event.@param eventNameThe name of the event being listened for
@param listenerThe event handler function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName.server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ] - map(
This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be
awaited before being passed to the result stream.@param fna function to map over every chunk in the stream. Async or not.
@returnsa stream mapped with the function fn.
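A minimal sketch, using Readable.from as a stand-in source:

import { Readable } from 'node:stream';

// Double every chunk; logs 2, 4, 6.
for await (const chunk of Readable.from([1, 2, 3]).map((x) => x * 2)) {
  console.log(chunk);
}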
- eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for
emitter.removeListener(). - on(event: 'close',listener: () => void): this;
Adds the
listenerfunction to the end of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
- once(event: 'close',listener: () => void): this;
Adds a one-time
listenerfunction for the event namedeventName. The next timeeventNameis triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
The
readable.pause()method will cause a stream in flowing mode to stop emitting'data'events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.const readable = getReadableStreamSomehow(); readable.on('data', (chunk) => { console.log(`Received ${chunk.length} bytes of data.`); readable.pause(); console.log('There will be no additional data for 1 second.'); setTimeout(() => { console.log('Now data will start flowing again.'); readable.resume(); }, 1000); });The
readable.pause()method has no effect if there is a'readable'event listener.- event: 'close',listener: () => void): this;
Adds the
listenerfunction to the beginning of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
- event: 'close',listener: () => void): this;
Adds a one-time
listenerfunction for the event namedeventNameto the beginning of the listeners array. The next timeeventNameis triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName, including any wrappers (such as those created by.once()).import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log'); - read(size?: number): any;
The
readable.read()method reads data out of the internal buffer and returns it. If no data is available to be read,nullis returned. By default, the data is returned as aBufferobject unless an encoding has been specified using thereadable.setEncoding()method or the stream is operating in object mode.The optional
sizeargument specifies a specific number of bytes to read. Ifsizebytes are not available to be read,nullwill be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.If the
sizeargument is not specified, all of the data contained in the internal buffer will be returned.The
sizeargument must be less than or equal to 1 GiB.The
readable.read()method should only be called onReadablestreams operating in paused mode. In flowing mode,readable.read()is called automatically until the internal buffer is fully drained.const readable = getReadableStreamSomehow(); // 'readable' may be triggered multiple times as data is buffered in readable.on('readable', () => { let chunk; console.log('Stream is readable (new data received in buffer)'); // Use a loop to make sure we read all currently available data while (null !== (chunk = readable.read())) { console.log(`Read ${chunk.length} bytes of data...`); } }); // 'end' will be triggered once when there is no more data available readable.on('end', () => { console.log('Reached end of stream.'); });Each call to
readable.read()returns a chunk of data, ornull. The chunks are not concatenated. Awhileloop is necessary to consume all data currently in the buffer. When reading a large file.read()may returnnull, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new'readable'event will be emitted when there is more data in the buffer. Finally the'end'event will be emitted when there is no more data to come.Therefore to read a file's whole contents from a
readable, it is necessary to collect chunks across multiple'readable'events:const chunks = []; readable.on('readable', () => { let chunk; while (null !== (chunk = readable.read())) { chunks.push(chunk); } }); readable.on('end', () => { const content = chunks.join(''); });A
Readablestream in object mode will always return a single item from a call toreadable.read(size), regardless of the value of thesizeargument.If the
readable.read()method returns a chunk of data, a'data'event will also be emitted.Calling read after the
'end'event has been emitted will returnnull. No runtime error will be raised.@param sizeOptional argument to specify how much data to read.
- initial?: undefined,): Promise<T>;
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeErrorwith theERR_INVALID_ARGScode property.The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function into a readable.map call.@param fna reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
initial: T,): Promise<T>;This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeErrorwith theERR_INVALID_ARGScode property.The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function into a readable.map call.@param fna reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
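A minimal sketch, using Readable.from as a stand-in source:

import { Readable } from 'node:stream';

// Sum the chunks, starting from an initial value of 0.
const total = await Readable.from([1, 2, 3, 4]).reduce(
  (acc, chunk) => acc + chunk,
  0,
);
console.log(total); // 10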
- eventName?: string | symbol): this;
Removes all listeners, or those of the specified
eventName.It is bad practice to remove listeners added elsewhere in the code, particularly when the
EventEmitterinstance was created by some other component or module (e.g. sockets or file streams).Returns a reference to the
EventEmitter, so that calls can be chained. - event: 'close',listener: () => void): this;
Removes the specified
listenerfrom the listener array for the event namedeventName.const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback);removeListener()will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specifiedeventName, thenremoveListener()must be called multiple times to remove each instance.Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any
removeListener()orremoveAllListeners()calls after emitting and before the last listener finishes execution will not remove them fromemit()in progress. Subsequent events behave as expected.import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // ABecause listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the
emitter.listeners()method will need to be recreated.When a single function has been added as a handler multiple times for a single event (as in the example below),
removeListener()will remove the most recently added instance. In the example theonce('ping')listener is removed:import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping');Returns a reference to the
EventEmitter, so that calls can be chained. The
readable.resume()method causes an explicitly pausedReadablestream to resume emitting'data'events, switching the stream into flowing mode.The
readable.resume()method can be used to fully consume the data from a stream without actually processing any of that data:getReadableStreamSomehow() .resume() .on('end', () => { console.log('Reached the end, but did not read anything.'); });The
readable.resume()method has no effect if there is a'readable'event listener.- encoding: BufferEncoding): this;
The
writable.setDefaultEncoding()method sets the defaultencodingfor aWritablestream.@param encodingThe new default encoding
- encoding: BufferEncoding): this;
The
readable.setEncoding()method sets the character encoding for data read from theReadablestream.By default, no encoding is assigned and stream data will be returned as
Bufferobjects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than asBufferobjects. For instance, callingreadable.setEncoding('utf8')will cause the output data to be interpreted as UTF-8 data, and passed as strings. Callingreadable.setEncoding('hex')will cause the data to be encoded in hexadecimal string format.The
Readablestream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream asBufferobjects.const readable = getReadableStreamSomehow(); readable.setEncoding('utf8'); readable.on('data', (chunk) => { assert.equal(typeof chunk, 'string'); console.log('Got %d characters of string data:', chunk.length); });@param encodingThe encoding to use.
- n: number): this;
By default
EventEmitters will print a warning if more than10listeners are added for a particular event. This is a useful default that helps finding memory leaks. Theemitter.setMaxListeners()method allows the limit to be modified for this specificEventEmitterinstance. The value can be set toInfinity(or0) to indicate an unlimited number of listeners.Returns a reference to the
EventEmitter, so that calls can be chained. - some(): Promise<boolean>;
This method is similar to
Array.prototype.some and calls fn on each chunk in the stream until the awaited return value is true (or any truthy value). Once an fn call on a chunk's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
trueif fn returned a truthy value for at least one of the chunks. - @param limit
the number of chunks to take from the readable.
@returnsa stream with limit chunks taken.
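A minimal sketch of some and take, using Readable.from as a stand-in source:

import { Readable } from 'node:stream';

// some(): resolves with true as soon as one chunk satisfies the
// predicate; the rest of the stream is then destroyed.
const hasEven = await Readable.from([1, 2, 3]).some((x) => x % 2 === 0);
console.log(hasEven); // true

// take(): keep only the first two chunks; logs 1 and 2.
for await (const chunk of Readable.from([1, 2, 3, 4]).take(2)) {
  console.log(chunk);
}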
- ): Promise<any[]>;
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.
@returnsa promise containing an array with the contents of the stream.
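A minimal sketch, using Readable.from as a stand-in source:

import { Readable } from 'node:stream';

// Collect every chunk into an array; resolves with [1, 2, 3].
const chunks = await Readable.from([1, 2, 3]).toArray();
console.log(chunks);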
The
writable.uncork()method flushes all data buffered since cork was called.When using
writable.cork()andwritable.uncork()to manage the buffering of writes to a stream, defer calls towritable.uncork()usingprocess.nextTick(). Doing so allows batching of allwritable.write()calls that occur within a given Node.js event loop phase.stream.cork(); stream.write('some '); stream.write('data '); process.nextTick(() => stream.uncork());If the
writable.cork()method is called multiple times on a stream, the same number of calls towritable.uncork()must be called to flush the buffered data.stream.cork(); stream.write('some '); stream.cork(); stream.write('data '); process.nextTick(() => { stream.uncork(); // The data will not be flushed until uncork() is called a second time. stream.uncork(); });See also:
writable.cork().- destination?: WritableStream): this;
The
readable.unpipe()method detaches aWritablestream previously attached using the pipe method.If the
destinationis not specified, then all pipes are detached.If the
destinationis specified, but no pipe is set up for it, then the method does nothing.import fs from 'node:fs'; const readable = getReadableStreamSomehow(); const writable = fs.createWriteStream('file.txt'); // All the data from readable goes into 'file.txt', // but only for the first second. readable.pipe(writable); setTimeout(() => { console.log('Stop writing to file.txt.'); readable.unpipe(writable); console.log('Manually close the file stream.'); writable.end(); }, 1000);@param destinationOptional specific stream to unpipe
- chunk: any,encoding?: BufferEncoding): void;
Passing
chunkasnullsignals the end of the stream (EOF) and behaves the same asreadable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.The
readable.unshift()method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.The
stream.unshift(chunk)method cannot be called after the'end'event has been emitted or a runtime error will be thrown.Developers using
stream.unshift()often should consider switching to use of aTransformstream instead. See theAPI for stream implementerssection for more information.// Pull off a header delimited by \n\n. // Use unshift() if we get too much. // Call the callback with (error, header, stream). import { StringDecoder } from 'node:string_decoder'; function parseHeader(stream, callback) { stream.on('error', callback); stream.on('readable', onReadable); const decoder = new StringDecoder('utf8'); let header = ''; function onReadable() { let chunk; while (null !== (chunk = stream.read())) { const str = decoder.write(chunk); if (str.includes('\n\n')) { // Found the header boundary. const split = str.split(/\n\n/); header += split.shift(); const remaining = split.join('\n\n'); const buf = Buffer.from(remaining, 'utf8'); stream.removeListener('error', callback); // Remove the 'readable' listener before unshifting. stream.removeListener('readable', onReadable); if (buf.length) stream.unshift(buf); // Now the body of the message can be read from the stream. callback(null, header, stream); return; } // Still reading the header. header += str; } } }Unlike push,
stream.unshift(chunk)will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results ifreadable.unshift()is called during a read (i.e. from within a _read implementation on a custom stream). Following the call toreadable.unshift()with an immediate push will reset the reading state appropriately, however it is best to simply avoid callingreadable.unshift()while in the process of performing a read.@param chunkChunk of data to unshift onto the read queue. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray}, {DataView} ornull. For object mode streams,chunkmay be any JavaScript value.@param encodingEncoding of string chunks. Must be a valid
Bufferencoding, such as'utf8'or'ascii'. - wrap(stream: ReadableStream): this;
Prior to Node.js 0.10, streams did not implement the entire
node:streammodule API as it is currently defined. (SeeCompatibilityfor more information.)When using an older Node.js library that emits
'data'events and has a pause method that is advisory only, thereadable.wrap()method can be used to create aReadablestream that uses the old stream as its data source.It will rarely be necessary to use
readable.wrap()but the method has been provided as a convenience for interacting with older Node.js applications and libraries.import { OldReader } from './old-api-module.js'; import { Readable } from 'node:stream'; const oreader = new OldReader(); const myReader = new Readable().wrap(oreader); myReader.on('readable', () => { myReader.read(); // etc. });@param streamAn "old style" readable stream
- chunk: any,): boolean;
The
writable.write()method writes some data to the stream, and calls the suppliedcallbackonce the data has been fully handled. If an error occurs, thecallbackwill be called with the error as its first argument. Thecallbackis called asynchronously and before'error'is emitted.The return value is
trueif the internal buffer is less than thehighWaterMarkconfigured when the stream was created after admittingchunk. Iffalseis returned, further attempts to write data to the stream should stop until the'drain'event is emitted.While a stream is not draining, calls to
write()will bufferchunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'event will be emitted. Oncewrite()returns false, do not write more chunks until the'drain'event is emitted. While callingwrite()on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform, because theTransformstreams are paused by default until they are piped or a'data'or'readable'event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readableand use pipe. However, if callingwrite()is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });A
Writablestream in object mode will always ignore theencodingargument.@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.@param callbackCallback for when this chunk of data is flushed.
@returnsfalseif the stream wishes for the calling code to wait for the'drain'event to be emitted before continuing to write additional data; otherwisetrue.chunk: any,encoding: BufferEncoding,): boolean;The
writable.write()method writes some data to the stream, and calls the suppliedcallbackonce the data has been fully handled. If an error occurs, thecallbackwill be called with the error as its first argument. Thecallbackis called asynchronously and before'error'is emitted.The return value is
trueif the internal buffer is less than thehighWaterMarkconfigured when the stream was created after admittingchunk. Iffalseis returned, further attempts to write data to the stream should stop until the'drain'event is emitted.While a stream is not draining, calls to
write()will bufferchunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'event will be emitted. Oncewrite()returns false, do not write more chunks until the'drain'event is emitted. While callingwrite()on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform, because theTransformstreams are paused by default until they are piped or a'data'or'readable'event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readableand use pipe. However, if callingwrite()is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });A
Writablestream in object mode will always ignore theencodingargument.@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.@param encodingThe encoding, if
chunkis a string.@param callbackCallback for when this chunk of data is flushed.
@returnsfalseif the stream wishes for the calling code to wait for the'drain'event to be emitted before continuing to write additional data; otherwisetrue.
interface ZstdOptions
- dictionary?: ArrayBufferView<ArrayBufferLike>
Optional dictionary used to improve compression efficiency when compressing or decompressing data that shares common patterns with the dictionary.
- params?: { [key: number]: number | boolean }
Key-value object containing indexed Zstd parameters.
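For illustration, a hedged sketch of passing indexed parameters. It assumes the zstd convenience APIs (zstdCompressSync, zstdDecompressSync) and the ZSTD_c_compressionLevel constant shipped in recent Node.js and Bun releases:

import zlib from 'node:zlib';

// Assumed API: zstdCompressSync/zstdDecompressSync accepting a params
// object keyed by indexed Zstd parameter constants.
const input = Buffer.from('hello hello hello');
const compressed = zlib.zstdCompressSync(input, {
  params: { [zlib.constants.ZSTD_c_compressionLevel]: 5 },
});
const output = zlib.zstdDecompressSync(compressed);
console.log(output.toString()); // 'hello hello hello'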
- type CompressCallback = (error: Error | null, result: NonSharedBuffer) => void
- type InputType = string | ArrayBuffer | NodeJS.ArrayBufferView