Duplex streams are streams that implement both the Readable and Writable interfaces. Examples of Duplex streams include:
TCP sockets
zlib streams
crypto streams
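As a brief illustration, here is a minimal sketch of a custom Duplex that echoes anything written to its writable side back out of its readable side (the constructor options shown are standard node:stream options; the stream itself is hypothetical):
import { Duplex } from 'node:stream';
const echo = new Duplex({
  read() {}, // chunks are pushed from write() below
  write(chunk, encoding, callback) {
    this.push(chunk); // forward the written chunk to the readable side
    callback();
  },
  final(callback) {
    this.push(null); // end the readable side when the writable side ends
    callback();
  },
});
echo.on('data', (chunk) => console.log(chunk.toString())); // Prints: hello
echo.write('hello');
echo.end();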
Set to true
if the Http2Stream
instance was aborted abnormally. When set, the 'aborted'
event will have been emitted.
If false
then the stream will automatically end the writable side when the readable side ends. Set initially by the allowHalfOpen
constructor option, which defaults to true
.
This can be changed manually to change the half-open behavior of an existing Duplex
stream instance, but must be changed before the 'end'
event is emitted.
This property shows the number of characters currently buffered to be written. See net.Socket.bufferSize
for details.
Set to true
if the Http2Stream
instance has been destroyed and is no longer usable.
Set to true
if the END_STREAM
flag was set in the request or response HEADERS frame received, indicating that no additional data should be received and the readable side of the Http2Stream
will be closed.
The numeric stream identifier of this Http2Stream
instance. Set to undefined
if the stream identifier has not yet been assigned.
Set to true
if the Http2Stream
instance has not yet been assigned a numeric stream identifier.
Read-only property mapped to the SETTINGS_ENABLE_PUSH
flag of the remote client's most recent SETTINGS
frame. Will be true
if the remote peer accepts push streams, false
otherwise. Settings are the same for every Http2Stream
in the same Http2Session
.
Is true
if it is safe to call read, which means the stream has not been destroyed or emitted 'error'
or 'end'
.
Returns whether the stream was destroyed or errored before emitting 'end'
.
Getter for the property encoding
of a given Readable
stream. The encoding
property can be set using the setEncoding method.
This property reflects the current state of a Readable
stream as described in the Three states section.
Returns the value of highWaterMark
passed when creating this Readable
.
This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the highWaterMark
.
Set to the RST_STREAM
error code
reported when the Http2Stream
is destroyed after either receiving an RST_STREAM
frame from the connected peer, calling http2stream.close()
, or http2stream.destroy()
. Will be undefined
if the Http2Stream
has not been closed.
An object containing the outbound headers sent for this Http2Stream
.
An array of objects containing the outbound informational (additional) headers sent for this Http2Stream
.
An object containing the outbound trailers sent for this Http2Stream
.
A reference to the Http2Session
instance that owns this Http2Stream
. The value will be undefined
after the Http2Stream
instance is destroyed.
Provides miscellaneous information about the current state of the Http2Stream
.
The current state of this Http2Stream
.
Is true
if it is safe to call writable.write()
, which means the stream has not been destroyed, errored, or ended.
Number of times writable.uncork()
needs to be called in order to fully uncork the stream.
Is true
after writable.end()
has been called. This property does not indicate whether the data has been flushed, for this use writable.writableFinished
instead.
Return the value of highWaterMark
passed when creating this Writable
.
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the highWaterMark
.
Is true
if the stream's buffer has been full and stream will emit 'drain'
.
Calls readable.destroy()
with an AbortError
and returns a promise that fulfills when the stream is finished.
Sends an additional informational HEADERS
frame to the connected HTTP/2 peer.
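For example, a hedged sketch in which a server sends an interim 102 response before the final headers (the handler and payload are illustrative):
import http2 from 'node:http2';
const server = http2.createServer();
server.on('stream', (stream) => {
  // Send an interim 102 Processing response before the final headers.
  stream.additionalHeaders({ ':status': 102 });
  stream.respond({ ':status': 200 });
  stream.end('done');
});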
This method returns a new stream with chunks of the underlying stream paired with a counter in the form [index, chunk]
. The first index value is 0
and it increases by 1 for each chunk produced.
a stream of indexed pairs.
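For example, a minimal sketch using Readable.from() and toArray() for brevity:
import { Readable } from 'node:stream';
const pairs = await Readable.from(['a', 'b', 'c']).asIndexedPairs().toArray();
console.log(pairs); // Prints: [ [ 0, 'a' ], [ 1, 'b' ], [ 2, 'c' ] ]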
Closes the Http2Stream
instance by sending an RST_STREAM
frame to the connected HTTP/2 peer.
Unsigned 32-bit integer identifying the error code.
An optional function registered to listen for the 'close'
event.
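For example, a sketch that closes a client request stream cleanly (the endpoint is hypothetical):
import http2 from 'node:http2';
const client = http2.connect('http://example.org:8000'); // hypothetical endpoint
const req = client.request({ ':path': '/' });
// Close the stream with NO_ERROR; the callback runs once 'close' is emitted.
req.close(http2.constants.NGHTTP2_NO_ERROR, () => client.close());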
The writable.cork()
method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.
The primary intent of writable.cork()
is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, writable.cork()
buffers all the chunks until writable.uncork()
is called, which will pass them all to writable._writev()
, if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork()
without implementing writable._writev()
may have an adverse effect on throughput.
See also: writable.uncork()
, writable._writev()
.
Destroy the stream. Optionally emit an 'error'
event, and emit a 'close'
event (unless emitClose
is set to false
). After this call, the readable stream will release any internal resources and subsequent calls to push()
will be ignored.
Once destroy()
has been called any further calls will be a no-op and no further errors except from _destroy()
may be emitted as 'error'
.
Implementors should not override this method, but instead implement readable._destroy()
.
Error which will be passed as payload in 'error'
event
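A minimal sketch of destroying a stream with an error payload:
import { Readable } from 'node:stream';
const readable = Readable.from(['a', 'b']);
readable.on('error', (err) => console.error(err.message)); // Prints: boom
readable.destroy(new Error('boom'));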
This method returns a new stream with the first limit chunks dropped from the start.
the number of chunks to drop from the readable.
a stream with limit chunks dropped from the start.
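For example:
import { Readable } from 'node:stream';
console.log(await Readable.from([1, 2, 3, 4]).drop(2).toArray()); // Prints: [ 3, 4 ]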
Synchronously calls each of the listeners registered for the event named eventName
, in the order they were registered, passing the supplied arguments to each.
Returns true
if the event had listeners, false
otherwise.
import { EventEmitter } from 'node:events';
const myEmitter = new EventEmitter();
// First listener
myEmitter.on('event', function firstListener() {
console.log('Helloooo! first listener');
});
// Second listener
myEmitter.on('event', function secondListener(arg1, arg2) {
console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
});
// Third listener
myEmitter.on('event', function thirdListener(...args) {
const parameters = args.join(', ');
console.log(`event with parameters ${parameters} in third listener`);
});
console.log(myEmitter.listeners('event'));
myEmitter.emit('event', 1, 2, 3, 4, 5);
// Prints:
// [
// [Function: firstListener],
// [Function: secondListener],
// [Function: thirdListener]
// ]
// Helloooo! first listener
// event with parameters 1, 2 in second listener
// event with parameters 1, 2, 3, 4, 5 in third listener
Calling the writable.end()
method signals that no more data will be written to the Writable
. The optional chunk
and encoding
arguments allow one final additional chunk of data to be written immediately before closing the stream.
Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'.
import fs from 'node:fs';
const file = fs.createWriteStream('example.txt');
file.write('hello, ');
file.end('world!');
// Writing more now is not allowed!
Optional data to write. For streams not operating in object mode, chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk
may be any JavaScript value other than null
.
The encoding if chunk
is a string
Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or Symbol
s.
import { EventEmitter } from 'node:events';
const myEE = new EventEmitter();
myEE.on('foo', () => {});
myEE.on('bar', () => {});
const sym = Symbol('symbol');
myEE.on(sym, () => {});
console.log(myEE.eventNames());
// Prints: [ 'foo', 'bar', Symbol(symbol) ]
This method is similar to Array.prototype.every and calls fn on each chunk in the stream to check whether every awaited return value is truthy. If the awaited return value of an fn call on a chunk is falsy, the stream is destroyed and the promise is fulfilled with false. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true.
a function to call on each chunk of the stream. Async or not.
a promise evaluating to true
if fn returned a truthy value for every one of the chunks.
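For example:
import { Readable } from 'node:stream';
const allPositive = await Readable.from([1, 2, 3, 4]).every((x) => x > 0);
console.log(allPositive); // Prints: true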
This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise, that promise will be awaited.
a function to filter chunks from the stream. Async or not.
a stream filtered with the predicate fn.
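For example:
import { Readable } from 'node:stream';
for await (const chunk of Readable.from([1, 2, 3, 4]).filter((x) => x > 2)) {
  console.log(chunk); // Prints: 3, 4
}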
This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.
a function to call on each chunk of the stream. Async or not.
a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined
if no element was found.
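For example:
import { Readable } from 'node:stream';
const found = await Readable.from([1, 2, 3, 4]).find((x) => x > 2);
console.log(found); // Prints: 3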
This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.
It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.
a function to map over every chunk in the stream. May be async. May be a stream or generator.
a stream flat-mapped with the function fn.
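For example, returning an array from fn flattens its elements into the result stream:
import { Readable } from 'node:stream';
const out = await Readable.from([1, 2]).flatMap((x) => [x, x * 10]).toArray();
console.log(out); // Prints: [ 1, 10, 2, 20 ]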
This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise, that promise will be awaited.
This method is different from for await...of loops in that it can optionally process chunks concurrently. In addition, a forEach iteration can only be stopped by passing a signal option and aborting the related AbortController, while for await...of can be stopped with break or return. In either case the stream will be destroyed.
This method is different from listening to the 'data' event in that it uses the readable event in the underlying machinery and can limit the number of concurrent fn calls.
a function to call on each chunk of the stream. Async or not.
a promise for when the stream has finished.
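For example, a sketch that processes up to two chunks at a time via the concurrency option:
import { Readable } from 'node:stream';
await Readable.from([1, 2, 3, 4]).forEach(async (x) => {
  console.log(x);
}, { concurrency: 2 });
console.log('done');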
Returns the current max listener value for the EventEmitter
which is either set by emitter.setMaxListeners(n)
or defaults to EventEmitter.defaultMaxListeners.
The readable.isPaused()
method returns the current operating state of the Readable
. This is used primarily by the mechanism that underlies the readable.pipe()
method. In most typical cases, there will be no reason to use this method directly.
const readable = new stream.Readable();
readable.isPaused(); // === false
readable.pause();
readable.isPaused(); // === true
readable.resume();
readable.isPaused(); // === false
The iterator created by this method gives users the option to cancel the destruction of the stream if the for await...of loop is exited by return, break, or throw, and to control whether the iterator should destroy the stream if the stream emitted an error during iteration.
Returns the number of listeners listening for the event named eventName
. If listener
is provided, it will return how many times the listener is found in the list of the listeners of the event.
The name of the event being listened for
The event handler function
Returns a copy of the array of listeners for the event named eventName
.
server.on('connection', (stream) => {
console.log('someone connected!');
});
console.log(util.inspect(server.listeners('connection')));
// Prints: [ [Function] ]
This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise, that promise will be awaited before being passed to the result stream.
a function to map over every chunk in the stream. Async or not.
a stream mapped with the function fn.
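For example:
import { Readable } from 'node:stream';
for await (const doubled of Readable.from([1, 2, 3]).map((x) => x * 2)) {
  console.log(doubled); // Prints: 2, 4, 6
}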
Alias for emitter.removeListener()
.
Adds the listener
function to the end of the listeners array for the event named eventName
. No checks are made to see if the listener
has already been added. Multiple calls passing the same combination of eventName
and listener
will result in the listener
being added, and called, multiple times.
server.on('connection', (stream) => {
console.log('someone connected!');
});
Returns a reference to the EventEmitter
, so that calls can be chained.
By default, event listeners are invoked in the order they are added. The emitter.prependListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.
import { EventEmitter } from 'node:events';
const myEE = new EventEmitter();
myEE.on('foo', () => console.log('a'));
myEE.prependListener('foo', () => console.log('b'));
myEE.emit('foo');
// Prints:
// b
// a
The callback function
Adds a one-time listener
function for the event named eventName
. The next time eventName
is triggered, this listener is removed and then invoked.
server.once('connection', (stream) => {
console.log('Ah, we have our first user!');
});
Returns a reference to the EventEmitter
, so that calls can be chained.
By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.
import { EventEmitter } from 'node:events';
const myEE = new EventEmitter();
myEE.once('foo', () => console.log('a'));
myEE.prependOnceListener('foo', () => console.log('b'));
myEE.emit('foo');
// Prints:
// b
// a
The callback function
The readable.pause()
method will cause a stream in flowing mode to stop emitting 'data'
events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.
const readable = getReadableStreamSomehow();
readable.on('data', (chunk) => {
console.log(`Received ${chunk.length} bytes of data.`);
readable.pause();
console.log('There will be no additional data for 1 second.');
setTimeout(() => {
console.log('Now data will start flowing again.');
readable.resume();
}, 1000);
});
The readable.pause()
method has no effect if there is a 'readable'
event listener.
Adds the listener
function to the beginning of the listeners array for the event named eventName
. No checks are made to see if the listener
has already been added. Multiple calls passing the same combination of eventName
and listener
will result in the listener
being added, and called, multiple times.
server.prependListener('connection', (stream) => {
console.log('someone connected!');
});
Returns a reference to the EventEmitter
, so that calls can be chained.
The callback function
Adds a one-time listener
function for the event named eventName
to the beginning of the listeners array. The next time eventName
is triggered, this listener is removed, and then invoked.
server.prependOnceListener('connection', (stream) => {
console.log('Ah, we have our first user!');
});
Returns a reference to the EventEmitter
, so that calls can be chained.
The callback function
Initiates a push stream. The callback is invoked with the new Http2Stream
instance created for the push stream passed as the second argument, or an Error
passed as the first argument.
import http2 from 'node:http2';
const server = http2.createServer();
server.on('stream', (stream) => {
stream.respond({ ':status': 200 });
stream.pushStream({ ':path': '/' }, (err, pushStream, headers) => {
if (err) throw err;
pushStream.respond({ ':status': 200 });
pushStream.end('some pushed data');
});
stream.end('some data');
});
Setting the weight of a push stream is not allowed in the HEADERS
frame. Pass a weight
value to http2stream.priority
with the silent
option set to true
to enable server-side bandwidth balancing between concurrent streams.
Calling http2stream.pushStream()
from within a pushed stream is not permitted and will throw an error.
Callback that is called once the push stream has been initiated.
Returns a copy of the array of listeners for the event named eventName
, including any wrappers (such as those created by .once()
).
import { EventEmitter } from 'node:events';
const emitter = new EventEmitter();
emitter.once('log', () => console.log('log once'));
// Returns a new Array with a function `onceWrapper` which has a property
// `listener` which contains the original listener bound above
const listeners = emitter.rawListeners('log');
const logFnWrapper = listeners[0];
// Logs "log once" to the console and does not unbind the `once` event
logFnWrapper.listener();
// Logs "log once" to the console and removes the listener
logFnWrapper();
emitter.on('log', () => console.log('log persistently'));
// Will return a new Array with a single function bound by `.on()` above
const newListeners = emitter.rawListeners('log');
// Logs "log persistently" twice
newListeners[0]();
emitter.emit('log');
The readable.read()
method reads data out of the internal buffer and returns it. If no data is available to be read, null
is returned. By default, the data is returned as a Buffer
object unless an encoding has been specified using the readable.setEncoding()
method or the stream is operating in object mode.
The optional size
argument specifies a specific number of bytes to read. If size
bytes are not available to be read, null
will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.
If the size
argument is not specified, all of the data contained in the internal buffer will be returned.
The size
argument must be less than or equal to 1 GiB.
The readable.read()
method should only be called on Readable
streams operating in paused mode. In flowing mode, readable.read()
is called automatically until the internal buffer is fully drained.
const readable = getReadableStreamSomehow();
// 'readable' may be triggered multiple times as data is buffered in
readable.on('readable', () => {
let chunk;
console.log('Stream is readable (new data received in buffer)');
// Use a loop to make sure we read all currently available data
while (null !== (chunk = readable.read())) {
console.log(`Read ${chunk.length} bytes of data...`);
}
});
// 'end' will be triggered once when there is no more data available
readable.on('end', () => {
console.log('Reached end of stream.');
});
Each call to readable.read()
returns a chunk of data, or null
. The chunks are not concatenated. A while
loop is necessary to consume all data currently in the buffer. When reading a large file .read()
may return null
, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new 'readable'
event will be emitted when there is more data in the buffer. Finally the 'end'
event will be emitted when there is no more data to come.
Therefore to read a file's whole contents from a readable
, it is necessary to collect chunks across multiple 'readable'
events:
const chunks = [];
readable.on('readable', () => {
let chunk;
while (null !== (chunk = readable.read())) {
chunks.push(chunk);
}
});
readable.on('end', () => {
const content = chunks.join('');
});
A Readable
stream in object mode will always return a single item from a call to readable.read(size)
, regardless of the value of the size
argument.
If the readable.read()
method returns a chunk of data, a 'data'
event will also be emitted.
Calling read after the 'end'
event has been emitted will return null
. No runtime error will be raised.
Optional argument to specify how much data to read.
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError
with the ERR_INVALID_ARGS
code property.
The reducer function iterates the stream element by element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function into a readable.map call.
a reducer function to call over every chunk in the stream. Async or not.
the initial value to use in the reduction.
a promise for the final value of the reduction.
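For example:
import { Readable } from 'node:stream';
const sum = await Readable.from([1, 2, 3, 4]).reduce((acc, x) => acc + x, 0);
console.log(sum); // Prints: 10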
Removes all listeners, or those of the specified eventName
.
It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter
instance was created by some other component or module (e.g. sockets or file streams).
Returns a reference to the EventEmitter
, so that calls can be chained.
Removes the specified listener
from the listener array for the event named eventName
.
const callback = (stream) => {
console.log('someone connected!');
};
server.on('connection', callback);
// ...
server.removeListener('connection', callback);
removeListener()
will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName
, then removeListener()
must be called multiple times to remove each instance.
Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener()
or removeAllListeners()
calls after emitting and before the last listener finishes execution will not remove them from emit()
in progress. Subsequent events behave as expected.
import { EventEmitter } from 'node:events';
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();
const callbackA = () => {
console.log('A');
myEmitter.removeListener('event', callbackB);
};
const callbackB = () => {
console.log('B');
};
myEmitter.on('event', callbackA);
myEmitter.on('event', callbackB);
// callbackA removes listener callbackB but it will still be called.
// Internal listener array at time of emit [callbackA, callbackB]
myEmitter.emit('event');
// Prints:
// A
// B
// callbackB is now removed.
// Internal listener array [callbackA]
myEmitter.emit('event');
// Prints:
// A
Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners()
method will need to be recreated.
When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener()
will remove the most recently added instance. In the example the once('ping')
listener is removed:
import { EventEmitter } from 'node:events';
const ee = new EventEmitter();
function pong() {
console.log('pong');
}
ee.on('ping', pong);
ee.once('ping', pong);
ee.removeListener('ping', pong);
ee.emit('ping');
ee.emit('ping');
Returns a reference to the EventEmitter
, so that calls can be chained.
import http2 from 'node:http2';
const server = http2.createServer();
server.on('stream', (stream) => {
stream.respond({ ':status': 200 });
stream.end('some data');
});
Initiates a response. When the options.waitForTrailers
option is set, the 'wantTrailers'
event will be emitted immediately after queuing the last chunk of payload data to be sent. The http2stream.sendTrailers()
method can then be used to send trailing header fields to the peer.
When options.waitForTrailers
is set, the Http2Stream
will not automatically close when the final DATA
frame is transmitted. User code must call either http2stream.sendTrailers()
or http2stream.close()
to close the Http2Stream
.
import http2 from 'node:http2';
const server = http2.createServer();
server.on('stream', (stream) => {
stream.respond({ ':status': 200 }, { waitForTrailers: true });
stream.on('wantTrailers', () => {
stream.sendTrailers({ ABC: 'some value to send' });
});
stream.end('some data');
});
Initiates a response whose data is read from the given file descriptor. No validation is performed on the given file descriptor. If an error occurs while attempting to read data using the file descriptor, the Http2Stream
will be closed using an RST_STREAM
frame using the standard INTERNAL_ERROR
code.
When used, the Http2Stream
object's Duplex
interface will be closed automatically.
import http2 from 'node:http2';
import fs from 'node:fs';
const server = http2.createServer();
server.on('stream', (stream) => {
const fd = fs.openSync('/some/file', 'r');
const stat = fs.fstatSync(fd);
const headers = {
'content-length': stat.size,
'last-modified': stat.mtime.toUTCString(),
'content-type': 'text/plain; charset=utf-8',
};
stream.respondWithFD(fd, headers);
stream.on('close', () => fs.closeSync(fd));
});
The optional options.statCheck
function may be specified to give user code an opportunity to set additional content headers based on the fs.Stat
details of the given fd. If the statCheck
function is provided, the http2stream.respondWithFD()
method will perform an fs.fstat()
call to collect details on the provided file descriptor.
The offset
and length
options may be used to limit the response to a specific range subset. This can be used, for instance, to support HTTP Range requests.
The file descriptor or FileHandle
is not closed when the stream is closed, so it will need to be closed manually once it is no longer needed. Using the same file descriptor concurrently for multiple streams is not supported and may result in data loss. Re-using a file descriptor after a stream has finished is supported.
When the options.waitForTrailers
option is set, the 'wantTrailers'
event will be emitted immediately after queuing the last chunk of payload data to be sent. The http2stream.sendTrailers()
method can then be used to send trailing header fields to the peer.
When options.waitForTrailers
is set, the Http2Stream
will not automatically close when the final DATA
frame is transmitted. User code must call either http2stream.sendTrailers()
or http2stream.close()
to close the Http2Stream
.
import http2 from 'node:http2';
import fs from 'node:fs';
const server = http2.createServer();
server.on('stream', (stream) => {
const fd = fs.openSync('/some/file', 'r');
const stat = fs.fstatSync(fd);
const headers = {
'content-length': stat.size,
'last-modified': stat.mtime.toUTCString(),
'content-type': 'text/plain; charset=utf-8',
};
stream.respondWithFD(fd, headers, { waitForTrailers: true });
stream.on('wantTrailers', () => {
stream.sendTrailers({ ABC: 'some value to send' });
});
stream.on('close', () => fs.closeSync(fd));
});
A readable file descriptor.
Sends a regular file as the response. The path
must specify a regular file or an 'error'
event will be emitted on the Http2Stream
object.
When used, the Http2Stream
object's Duplex
interface will be closed automatically.
The optional options.statCheck
function may be specified to give user code an opportunity to set additional content headers based on the fs.Stat
details of the given file:
If an error occurs while attempting to read the file data, the Http2Stream
will be closed using an RST_STREAM
frame using the standard INTERNAL_ERROR
code. If the onError
callback is defined, then it will be called. Otherwise, the stream will be destroyed.
Example using a file path:
import http2 from 'node:http2';
const server = http2.createServer();
server.on('stream', (stream) => {
function statCheck(stat, headers) {
headers['last-modified'] = stat.mtime.toUTCString();
}
function onError(err) {
// stream.respond() can throw if the stream has been destroyed by
// the other side.
try {
if (err.code === 'ENOENT') {
stream.respond({ ':status': 404 });
} else {
stream.respond({ ':status': 500 });
}
} catch (err) {
// Perform actual error handling.
console.error(err);
}
stream.end();
}
stream.respondWithFile('/some/file',
{ 'content-type': 'text/plain; charset=utf-8' },
{ statCheck, onError });
});
The options.statCheck
function may also be used to cancel the send operation by returning false
. For instance, a conditional request may check the stat results to determine if the file has been modified to return an appropriate 304
response:
import http2 from 'node:http2';
const server = http2.createServer();
server.on('stream', (stream) => {
function statCheck(stat, headers) {
// Check the stat here...
stream.respond({ ':status': 304 });
return false; // Cancel the send operation
}
stream.respondWithFile('/some/file',
{ 'content-type': 'text/plain; charset=utf-8' },
{ statCheck });
});
The content-length
header field will be automatically set.
The offset
and length
options may be used to limit the response to a specific range subset. This can be used, for instance, to support HTTP Range requests.
The options.onError
function may also be used to handle all the errors that could happen before the delivery of the file is initiated. The default behavior is to destroy the stream.
When the options.waitForTrailers
option is set, the 'wantTrailers'
event will be emitted immediately after queuing the last chunk of payload data to be sent. The http2stream.sendTrailers()
method can then be used to send trailing header fields to the peer.
When options.waitForTrailers
is set, the Http2Stream
will not automatically close when the final DATA
frame is transmitted. User code must call either http2stream.sendTrailers()
or http2stream.close()
to close the Http2Stream
.
import http2 from 'node:http2';
const server = http2.createServer();
server.on('stream', (stream) => {
stream.respondWithFile('/some/file',
{ 'content-type': 'text/plain; charset=utf-8' },
{ waitForTrailers: true });
stream.on('wantTrailers', () => {
stream.sendTrailers({ ABC: 'some value to send' });
});
});
The readable.resume()
method causes an explicitly paused Readable
stream to resume emitting 'data'
events, switching the stream into flowing mode.
The readable.resume()
method can be used to fully consume the data from a stream without actually processing any of that data:
getReadableStreamSomehow()
.resume()
.on('end', () => {
console.log('Reached the end, but did not read anything.');
});
The readable.resume()
method has no effect if there is a 'readable'
event listener.
Sends a trailing HEADERS
frame to the connected HTTP/2 peer. This method will cause the Http2Stream
to be immediately closed and must only be called after the 'wantTrailers'
event has been emitted. When sending a request or sending a response, the options.waitForTrailers
option must be set in order to keep the Http2Stream
open after the final DATA
frame so that trailers can be sent.
import http2 from 'node:http2';
const server = http2.createServer();
server.on('stream', (stream) => {
stream.respond(undefined, { waitForTrailers: true });
stream.on('wantTrailers', () => {
stream.sendTrailers({ xyz: 'abc' });
});
stream.end('Hello World');
});
The HTTP/1 specification forbids trailers from containing HTTP/2 pseudo-header fields (e.g. ':method'
, ':path'
, etc).
The writable.setDefaultEncoding()
method sets the default encoding
for a Writable
stream.
The new default encoding
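For example, a sketch setting the default encoding on a write stream (the file name is hypothetical):
import fs from 'node:fs';
const file = fs.createWriteStream('example.txt'); // hypothetical file name
file.setDefaultEncoding('utf8');
// Strings written without an explicit encoding are now encoded as UTF-8.
file.end('héllo');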
The readable.setEncoding()
method sets the character encoding for data read from the Readable
stream.
By default, no encoding is assigned and stream data will be returned as Buffer
objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than as Buffer
objects. For instance, calling readable.setEncoding('utf8')
will cause the output data to be interpreted as UTF-8 data, and passed as strings. Calling readable.setEncoding('hex')
will cause the data to be encoded in hexadecimal string format.
The Readable
stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream as Buffer
objects.
const readable = getReadableStreamSomehow();
readable.setEncoding('utf8');
readable.on('data', (chunk) => {
assert.equal(typeof chunk, 'string');
console.log('Got %d characters of string data:', chunk.length);
});
The encoding to use.
By default EventEmitter
s will print a warning if more than 10
listeners are added for a particular event. This is a useful default that helps finding memory leaks. The emitter.setMaxListeners()
method allows the limit to be modified for this specific EventEmitter
instance. The value can be set to Infinity
(or 0
) to indicate an unlimited number of listeners.
Returns a reference to the EventEmitter
, so that calls can be chained.
import http2 from 'node:http2';
const client = http2.connect('http://example.org:8000');
const { NGHTTP2_CANCEL } = http2.constants;
const req = client.request({ ':path': '/' });
// Cancel the stream if there's no activity after 5 seconds
req.setTimeout(5000, () => req.close(NGHTTP2_CANCEL));
This method is similar to Array.prototype.some and calls fn on each chunk in the stream until an awaited return value is true (or any truthy value). If the awaited return value of an fn call on a chunk is truthy, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false.
a function to call on each chunk of the stream. Async or not.
a promise evaluating to true
if fn returned a truthy value for at least one of the chunks.
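For example:
import { Readable } from 'node:stream';
const anyLarge = await Readable.from([1, 2, 3, 4]).some((x) => x > 3);
console.log(anyLarge); // Prints: true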
This method returns a new stream with the first limit chunks.
the number of chunks to take from the readable.
a stream with limit chunks taken.
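For example:
import { Readable } from 'node:stream';
console.log(await Readable.from([1, 2, 3, 4]).take(2).toArray()); // Prints: [ 1, 2 ]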
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.
a promise containing an array with the contents of the stream.
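For example:
import { Readable } from 'node:stream';
const contents = await Readable.from(['a', 'b', 'c']).toArray();
console.log(contents); // Prints: [ 'a', 'b', 'c' ]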
The writable.uncork()
method flushes all data buffered since cork was called.
When using writable.cork()
and writable.uncork()
to manage the buffering of writes to a stream, defer calls to writable.uncork()
using process.nextTick()
. Doing so allows batching of all writable.write()
calls that occur within a given Node.js event loop phase.
stream.cork();
stream.write('some ');
stream.write('data ');
process.nextTick(() => stream.uncork());
If the writable.cork()
method is called multiple times on a stream, the same number of calls to writable.uncork()
must be called to flush the buffered data.
stream.cork();
stream.write('some ');
stream.cork();
stream.write('data ');
process.nextTick(() => {
stream.uncork();
// The data will not be flushed until uncork() is called a second time.
stream.uncork();
});
See also: writable.cork()
.
The readable.unpipe()
method detaches a Writable
stream previously attached using the pipe method.
If the destination
is not specified, then all pipes are detached.
If the destination
is specified, but no pipe is set up for it, then the method does nothing.
import fs from 'node:fs';
const readable = getReadableStreamSomehow();
const writable = fs.createWriteStream('file.txt');
// All the data from readable goes into 'file.txt',
// but only for the first second.
readable.pipe(writable);
setTimeout(() => {
console.log('Stop writing to file.txt.');
readable.unpipe(writable);
console.log('Manually close the file stream.');
writable.end();
}, 1000);
Optional specific stream to unpipe
Passing chunk
as null
signals the end of the stream (EOF) and behaves the same as readable.push(null)
, after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.
The readable.unshift()
method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.
The stream.unshift(chunk)
method cannot be called after the 'end'
event has been emitted or a runtime error will be thrown.
Developers using stream.unshift()
often should consider switching to use of a Transform
stream instead. See the API for stream implementers
section for more information.
// Pull off a header delimited by \n\n.
// Use unshift() if we get too much.
// Call the callback with (error, header, stream).
import { StringDecoder } from 'node:string_decoder';
function parseHeader(stream, callback) {
stream.on('error', callback);
stream.on('readable', onReadable);
const decoder = new StringDecoder('utf8');
let header = '';
function onReadable() {
let chunk;
while (null !== (chunk = stream.read())) {
const str = decoder.write(chunk);
if (str.includes('\n\n')) {
// Found the header boundary.
const split = str.split(/\n\n/);
header += split.shift();
const remaining = split.join('\n\n');
const buf = Buffer.from(remaining, 'utf8');
stream.removeListener('error', callback);
// Remove the 'readable' listener before unshifting.
stream.removeListener('readable', onReadable);
if (buf.length)
stream.unshift(buf);
// Now the body of the message can be read from the stream.
callback(null, header, stream);
return;
}
// Still reading the header.
header += str;
}
}
}
Unlike push, stream.unshift(chunk)
will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results if readable.unshift()
is called during a read (i.e. from within a _read implementation on a custom stream). Following the call to readable.unshift()
with an immediate push will reset the reading state appropriately, however it is best to simply avoid calling readable.unshift()
while in the process of performing a read.
Chunk of data to unshift onto the read queue. For streams not operating in object mode, chunk
must be a {string}, {Buffer}, {TypedArray}, {DataView} or null
. For object mode streams, chunk
may be any JavaScript value.
Encoding of string chunks. Must be a valid Buffer
encoding, such as 'utf8'
or 'ascii'
.
Prior to Node.js 0.10, streams did not implement the entire node:stream
module API as it is currently defined. (See Compatibility
for more information.)
When using an older Node.js library that emits 'data'
events and has a pause method that is advisory only, the readable.wrap()
method can be used to create a Readable
stream that uses the old stream as its data source.
It will rarely be necessary to use readable.wrap()
but the method has been provided as a convenience for interacting with older Node.js applications and libraries.
import { OldReader } from './old-api-module.js';
import { Readable } from 'node:stream';
const oreader = new OldReader();
const myReader = new Readable().wrap(oreader);
myReader.on('readable', () => {
myReader.read(); // etc.
});
An "old style" readable stream
The writable.write()
method writes some data to the stream, and calls the supplied callback
once the data has been fully handled. If an error occurs, the callback
will be called with the error as its first argument. The callback
is called asynchronously and before 'error'
is emitted.
The return value is true
if the internal buffer is less than the highWaterMark
configured when the stream was created after admitting chunk
. If false
is returned, further attempts to write data to the stream should stop until the 'drain'
event is emitted.
While a stream is not draining, calls to write()
will buffer chunk
, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain'
event will be emitted. Once write()
returns false, do not write more chunks until the 'drain'
event is emitted. While calling write()
on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.
Writing data while the stream is not draining is particularly problematic for a Transform
, because the Transform
streams are paused by default until they are piped or a 'data'
or 'readable'
event handler is added.
If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable
and use pipe. However, if calling write()
is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain'
event:
function write(data, cb) {
if (!stream.write(data)) {
stream.once('drain', cb);
} else {
process.nextTick(cb);
}
}
// Wait for cb to be called before doing any other write.
write('hello', () => {
console.log('Write completed, do more writes now.');
});
A Writable
stream in object mode will always ignore the encoding
argument.
Optional data to write. For streams not operating in object mode, chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk
may be any JavaScript value other than null
.
The encoding, if chunk
is a string.
Callback for when this chunk of data is flushed.
false
if the stream wishes for the calling code to wait for the 'drain'
event to be emitted before continuing to write additional data; otherwise true
.