An Http2ServerRequest object is created by Server or SecureServer and passed as the first argument to the 'request' event. It may be used to access the request status, headers, and data.
Node.js module
http2
The 'node:http2'
module provides an API for HTTP/2 clients and servers, including support for multiplexing streams, HPACK header compression, and server push.
Works in Bun
Client & server are implemented (95.25% of gRPC's test suite passes). Some options, the ALTSVC extension, and server push functionality are missing.
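As a quick orientation, here is a minimal sketch of an HTTP/2 server built with this module. The key.pem and cert.pem paths are placeholders you would supply yourself; browsers only negotiate HTTP/2 over TLS, so createSecureServer is used:

```js
import http2 from 'node:http2';
import fs from 'node:fs';

// Placeholder certificate files; generate your own for local testing.
const server = http2.createSecureServer({
  key: fs.readFileSync('key.pem'),
  cert: fs.readFileSync('cert.pem'),
});

server.on('request', (req, res) => {
  // req is an Http2ServerRequest, res is an Http2ServerResponse.
  res.writeHead(200, { 'content-type': 'text/plain' });
  res.end(`You requested ${req.url}\n`);
});

server.listen(8443);
```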
namespace constants
class Http2ServerRequest
- readonly complete: boolean
The request.complete property will be true if the request has been completed, aborted, or destroyed.
- readonly headers: IncomingHttpHeaders
The request/response headers object.
Key-value pairs of header names and values. Header names are lower-cased.
```js
// Prints something like:
//
// { 'user-agent': 'curl/7.22.0',
//   host: '127.0.0.1:8000',
//   accept: '*' }
console.log(request.headers);
```
See HTTP/2 Headers Object.
In HTTP/2, the request path, host name, protocol, and method are represented as special headers prefixed with the : character (e.g. ':path'). These special headers will be included in the request.headers object. Care must be taken not to inadvertently modify these special headers or errors may occur. For instance, removing all headers from the request will cause errors to occur:
```js
removeAllHeaders(request.headers);
assert(request.url); // Fails because the :path header has been removed
```
- readonly httpVersion: string
In the case of a server request, the HTTP version sent by the client. In the case of a client response, the HTTP version of the connected-to server. Returns '2.0'.
Also, message.httpVersionMajor is the first integer and message.httpVersionMinor is the second.
- readonly rawHeaders: string[]
The raw request/response headers list exactly as they were received.
The keys and values are in the same list. It is not a list of tuples. So, the even-numbered offsets are key values, and the odd-numbered offsets are the associated values.
Header names are not lowercased, and duplicates are not merged.
```js
// Prints something like:
//
// [ 'user-agent',
//   'this is invalid because there can be only one',
//   'User-Agent',
//   'curl/7.22.0',
//   'Host',
//   '127.0.0.1:8000',
//   'ACCEPT',
//   '*' ]
console.log(request.rawHeaders);
```
- readonly rawTrailers: string[]
The raw request/response trailer keys and values exactly as they were received. Only populated at the 'end' event.
- readable: boolean
Is true if it is safe to call read, which means the stream has not been destroyed or emitted 'error' or 'end'.
- readonly readableAborted: boolean
Returns whether the stream was destroyed or errored before emitting 'end'.
- readonly readableEncoding: null | BufferEncoding
Getter for the property encoding of a given Readable stream. The encoding property can be set using the setEncoding method.
- readonly readableFlowing: null | boolean
This property reflects the current state of a Readable stream as described in the Three states section.
- readonly readableHighWaterMark: number
Returns the value of highWaterMark passed when creating this Readable.
- readonly readableLength: number
This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the highWaterMark.
- readonly scheme: string
The request scheme pseudo header field indicating the scheme portion of the target URL.
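For illustration, a short sketch of reading the scheme together with the related pseudo-headers, reusing a server like the one sketched near the top of this page (the printed values depend on the incoming request):

```js
server.on('request', (req, res) => {
  console.log(req.scheme);           // e.g. 'https'
  console.log(req.headers[':path']); // e.g. '/status?name=ryan'
  console.log(req.method);           // e.g. 'GET'
  res.end();
});
```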
- readonly socket: Socket | TLSSocket
Returns a Proxy object that acts as a net.Socket (or tls.TLSSocket) but applies getters, setters, and methods based on HTTP/2 logic.
destroyed, readable, and writable properties will be retrieved from and set on request.stream.
destroy, emit, end, on and once methods will be called on request.stream.
setTimeout method will be called on request.stream.session.
pause, read, resume, and write will throw an error with code ERR_HTTP2_NO_SOCKET_MANIPULATION. See Http2Session and Sockets for more information.
All other interactions will be routed directly to the socket. With TLS support, use request.socket.getPeerCertificate() to obtain the client's authentication details.
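A sketch of the TLS case mentioned above; requestCert and the certificate paths are illustrative assumptions, not requirements of the API:

```js
import http2 from 'node:http2';
import fs from 'node:fs';

const server = http2.createSecureServer({
  key: fs.readFileSync('key.pem'),   // placeholder paths
  cert: fs.readFileSync('cert.pem'),
  requestCert: true,                 // ask the client for a certificate
  rejectUnauthorized: false,
});

server.on('request', (req, res) => {
  // The Proxy routes this call to the underlying TLSSocket.
  const cert = req.socket.getPeerCertificate();
  const name = cert && cert.subject ? cert.subject.CN : 'anonymous client';
  res.end(`Hello, ${name}.\n`);
});

server.listen(8443);
```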
- readonly trailers: IncomingHttpHeaders
The request/response trailers object. Only populated at the 'end' event.
- url: string
Request URL string. This contains only the URL that is present in the actual HTTP request. If the request is:
```
GET /status?name=ryan HTTP/1.1
Accept: text/plain
```
Then request.url will be: '/status?name=ryan'
To parse the url into its parts, new URL() can be used:
```console
$ node
> new URL('/status?name=ryan', 'http://example.com')
URL {
  href: 'http://example.com/status?name=ryan',
  origin: 'http://example.com',
  protocol: 'http:',
  username: '',
  password: '',
  host: 'example.com',
  hostname: 'example.com',
  port: '',
  pathname: '/status',
  search: '?name=ryan',
  searchParams: URLSearchParams { 'name' => 'ryan' },
  hash: ''
}
```
- static captureRejections: boolean
Value: boolean
Change the default captureRejections option on all new EventEmitter objects.
- readonly static captureRejectionSymbol: typeof captureRejectionSymbol
Value: Symbol.for('nodejs.rejection')
See how to write a custom rejection handler.
- static defaultMaxListeners: number
By default, a maximum of 10 listeners can be registered for any single event. This limit can be changed for individual EventEmitter instances using the emitter.setMaxListeners(n) method. To change the default for all EventEmitter instances, the events.defaultMaxListeners property can be used. If this value is not a positive number, a RangeError is thrown.
Take caution when setting the events.defaultMaxListeners because the change affects all EventEmitter instances, including those created before the change is made. However, calling emitter.setMaxListeners(n) still has precedence over events.defaultMaxListeners.
This is not a hard limit. The EventEmitter instance will allow more listeners to be added but will output a trace warning to stderr indicating that a "possible EventEmitter memory leak" has been detected. For any single EventEmitter, the emitter.getMaxListeners() and emitter.setMaxListeners() methods can be used to temporarily avoid this warning:
```js
import { EventEmitter } from 'node:events';
const emitter = new EventEmitter();
emitter.setMaxListeners(emitter.getMaxListeners() + 1);
emitter.once('event', () => {
  // do stuff
  emitter.setMaxListeners(Math.max(emitter.getMaxListeners() - 1, 0));
});
```
The --trace-warnings command-line flag can be used to display the stack trace for such warnings.
The emitted warning can be inspected with process.on('warning') and will have the additional emitter, type, and count properties, referring to the event emitter instance, the event's name and the number of attached listeners, respectively. Its name property is set to 'MaxListenersExceededWarning'.
- readonly static errorMonitor: typeof errorMonitor
This symbol shall be used to install a listener for only monitoring 'error' events. Listeners installed using this symbol are called before the regular 'error' listeners are called.
Installing a listener using this symbol does not change the behavior once an 'error' event is emitted. Therefore, the process will still crash if no regular 'error' listener is installed.
Calls readable.destroy() with an AbortError and returns a promise that fulfills when the stream is finished.
- event: 'aborted', listener: (hadError: boolean, code: number) => void): this;
Event emitter The defined events on documents including:
- close
- data
- end
- error
- pause
- readable
- resume
event: 'close',listener: () => void): this;Event emitter The defined events on documents including:
- close
- data
- end
- error
- pause
- readable
- resume
event: 'data',): this;Event emitter The defined events on documents including:
- close
- data
- end
- error
- pause
- readable
- resume
event: 'end',listener: () => void): this;Event emitter The defined events on documents including:
- close
- data
- end
- error
- pause
- readable
- resume
event: 'readable',listener: () => void): this;Event emitter The defined events on documents including:
- close
- data
- end
- error
- pause
- readable
- resume
event: 'error',): this;Event emitter The defined events on documents including:
- close
- data
- end
- error
- pause
- readable
- resume
event: string | symbol,listener: (...args: any[]) => void): this;Event emitter The defined events on documents including:
- close
- data
- end
- error
- pause
- readable
- resume
This method returns a new stream with chunks of the underlying stream paired with a counter in the form [index, chunk]. The first index value is 0 and it increases by 1 for each chunk produced.
@returns a stream of indexed pairs.
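For example, a small sketch using the stream helper methods (experimental in some Node.js versions):

```js
import { Readable } from 'node:stream';

const pairs = await Readable.from(['a', 'b', 'c']).asIndexedPairs().toArray();
console.log(pairs); // [ [ 0, 'a' ], [ 1, 'b' ], [ 2, 'c' ] ]
```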
- stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
- ): this;
Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the readable stream will release any internal resources and subsequent calls to push() will be ignored.
Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.
Implementors should not override this method, but instead implement readable._destroy().
@param error Error which will be passed as payload in 'error' event
- drop(limit: number,
This method returns a new stream with the first limit chunks dropped from the start.
@param limit the number of chunks to drop from the readable.
@returns a stream with limit chunks dropped from the start.
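For example, a sketch dropping the first two chunks:

```js
import { Readable } from 'node:stream';

console.log(await Readable.from([1, 2, 3, 4]).drop(2).toArray()); // [ 3, 4 ]
```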
- emit(event: 'aborted',hadError: boolean,code: number): boolean;
Synchronously calls each of the listeners registered for the event named
eventName
, in the order they were registered, passing the supplied arguments to each.Returns
true
if the event had listeners,false
otherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener
Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbol
s.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]
- ): Promise<boolean>;
This method is similar to Array.prototype.every and calls fn on each chunk in the stream to check whether every awaited return value is truthy for fn. Once an fn call's awaited return value for a chunk is falsy, the stream is destroyed and the promise is fulfilled with false. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true.
@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to true if fn returned a truthy value for every one of the chunks.
This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise, that promise will be awaited.
@param fn a function to filter chunks from the stream. Async or not.
@returns a stream filtered with the predicate fn.
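A short sketch of both helpers:

```js
import { Readable } from 'node:stream';

// every(): do all chunks satisfy the predicate?
console.log(await Readable.from([1, 2, 3]).every((n) => n > 0)); // true

// filter(): keep only the matching chunks.
console.log(await Readable.from([1, 2, 3, 4]).filter((n) => n % 2 === 0).toArray()); // [ 2, 4 ]
```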
- ): Promise<undefined | T>;
This method is similar to
Array.prototype.find
and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled withundefined
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefined
if no element was found.find(): Promise<any>;This method is similar to
Array.prototype.find
and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled withundefined
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefined
if no element was found. This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.
It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.
@param fna function to map over every chunk in the stream. May be async. May be a stream or generator.
@returnsa stream flat-mapped with the function fn.
- ): Promise<void>;
This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise, that promise will be awaited.
This method is different from for await...of loops in that it can optionally process chunks concurrently. In addition, a forEach iteration can only be stopped by having passed a signal option and aborting the related AbortController, while for await...of can be stopped with break or return. In either case the stream will be destroyed.
This method is different from listening to the 'data' event in that it uses the readable event in the underlying machinery and can limit the number of concurrent fn calls.
@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise for when the stream has finished.
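For example, a sketch that processes up to two chunks at a time:

```js
import { Readable } from 'node:stream';
import { setTimeout as sleep } from 'node:timers/promises';

await Readable.from([1, 2, 3, 4]).forEach(async (n) => {
  await sleep(10); // simulate async work
  console.log(n);
}, { concurrency: 2 });
```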
Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to EventEmitter.defaultMaxListeners.
The readable.isPaused() method returns the current operating state of the Readable. This is used primarily by the mechanism that underlies the readable.pipe() method. In most typical cases, there will be no reason to use this method directly.
```js
const readable = new stream.Readable();

readable.isPaused(); // === false
readable.pause();
readable.isPaused(); // === true
readable.resume();
readable.isPaused(); // === false
```
- options?: { destroyOnReturn: boolean }): AsyncIterator<any>;
The iterator created by this method gives users the option to cancel the destruction of the stream if the
for await...of
loop is exited byreturn
,break
, orthrow
, or if the iterator should destroy the stream if the stream emitted an error during iteration. - eventName: string | symbol,listener?: Function): number;
Returns the number of listeners listening for the event named
eventName
. Iflistener
is provided, it will return how many times the listener is found in the list of the listeners of the event.@param eventNameThe name of the event being listened for
@param listenerThe event handler function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName
.server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ]
- map(
This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be
await
ed before being passed to the result stream.@param fna function to map over every chunk in the stream. Async or not.
@returnsa stream mapped with the function fn.
- eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for
emitter.removeListener()
. - on(event: 'aborted',listener: (hadError: boolean, code: number) => void): this;
Adds the
listener
function to the end of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
- once(event: 'aborted',listener: (hadError: boolean, code: number) => void): this;
Adds a one-time
listener
function for the event namedeventName
. The next timeeventName
is triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
The readable.pause() method will cause a stream in flowing mode to stop emitting 'data' events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.
```js
const readable = getReadableStreamSomehow();
readable.on('data', (chunk) => {
  console.log(`Received ${chunk.length} bytes of data.`);
  readable.pause();
  console.log('There will be no additional data for 1 second.');
  setTimeout(() => {
    console.log('Now data will start flowing again.');
    readable.resume();
  }, 1000);
});
```
The readable.pause() method has no effect if there is a 'readable' event listener.
- event: 'aborted', listener: (hadError: boolean, code: number) => void): this;
Adds the
listener
function to the beginning of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
- event: 'aborted',listener: (hadError: boolean, code: number) => void): this;
Adds a one-time
listener
function for the event namedeventName
to the beginning of the listeners array. The next timeeventName
is triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
event: 'data',): this; - eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName
, including any wrappers (such as those created by.once()
).import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log');
- read(size?: number
The
readable.read()
method reads data out of the internal buffer and returns it. If no data is available to be read,null
is returned. By default, the data is returned as aBuffer
object unless an encoding has been specified using thereadable.setEncoding()
method or the stream is operating in object mode.The optional
size
argument specifies a specific number of bytes to read. Ifsize
bytes are not available to be read,null
will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.If the
size
argument is not specified, all of the data contained in the internal buffer will be returned.The
size
argument must be less than or equal to 1 GiB.The
readable.read()
method should only be called onReadable
streams operating in paused mode. In flowing mode,readable.read()
is called automatically until the internal buffer is fully drained.const readable = getReadableStreamSomehow(); // 'readable' may be triggered multiple times as data is buffered in readable.on('readable', () => { let chunk; console.log('Stream is readable (new data received in buffer)'); // Use a loop to make sure we read all currently available data while (null !== (chunk = readable.read())) { console.log(`Read ${chunk.length} bytes of data...`); } }); // 'end' will be triggered once when there is no more data available readable.on('end', () => { console.log('Reached end of stream.'); });
Each call to
readable.read()
returns a chunk of data, ornull
. The chunks are not concatenated. Awhile
loop is necessary to consume all data currently in the buffer. When reading a large file.read()
may returnnull
, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new'readable'
event will be emitted when there is more data in the buffer. Finally the'end'
event will be emitted when there is no more data to come.Therefore to read a file's whole contents from a
readable
, it is necessary to collect chunks across multiple'readable'
events:const chunks = []; readable.on('readable', () => { let chunk; while (null !== (chunk = readable.read())) { chunks.push(chunk); } }); readable.on('end', () => { const content = chunks.join(''); });
A
Readable
stream in object mode will always return a single item from a call toreadable.read(size)
, regardless of the value of thesize
argument.If the
readable.read()
method returns a chunk of data, a'data'
event will also be emitted.Calling read after the
'end'
event has been emitted will returnnull
. No runtime error will be raised.@param sizeOptional argument to specify how much data to read.
- initial?: undefined,): Promise<T>;
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.
The reducer function iterates the stream element-by-element which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the readable.map method.
@param fn a reducer function to call over every chunk in the stream. Async or not.
@param initial the initial value to use in the reduction.
@returns a promise for the final value of the reduction.
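For example, a sketch summing the chunks of a stream:

```js
import { Readable } from 'node:stream';

const total = await Readable.from([1, 2, 3, 4]).reduce((sum, n) => sum + n, 0);
console.log(total); // 10
```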
initial: T,): Promise<T>;This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeError
with theERR_INVALID_ARGS
code property.The reducer function iterates the stream element-by-element which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to
readable.map
method.@param fna reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
- eventName?: string | symbol): this;
Removes all listeners, or those of the specified
eventName
.It is bad practice to remove listeners added elsewhere in the code, particularly when the
EventEmitter
instance was created by some other component or module (e.g. sockets or file streams).Returns a reference to the
EventEmitter
, so that calls can be chained. - event: 'close',listener: () => void): this;
Removes the specified
listener
from the listener array for the event namedeventName
.const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback);
removeListener()
will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specifiedeventName
, thenremoveListener()
must be called multiple times to remove each instance.Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any
removeListener()
orremoveAllListeners()
calls after emitting and before the last listener finishes execution will not remove them fromemit()
in progress. Subsequent events behave as expected.import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // A
Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the
emitter.listeners()
method will need to be recreated.When a single function has been added as a handler multiple times for a single event (as in the example below),
removeListener()
will remove the most recently added instance. In the example theonce('ping')
listener is removed:import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping');
Returns a reference to the
EventEmitter
, so that calls can be chained. The
readable.resume()
method causes an explicitly pausedReadable
stream to resume emitting'data'
events, switching the stream into flowing mode.The
readable.resume()
method can be used to fully consume the data from a stream without actually processing any of that data:getReadableStreamSomehow() .resume() .on('end', () => { console.log('Reached the end, but did not read anything.'); });
The
readable.resume()
method has no effect if there is a'readable'
event listener.- encoding: BufferEncoding): this;
The
readable.setEncoding()
method sets the character encoding for data read from theReadable
stream.By default, no encoding is assigned and stream data will be returned as
Buffer
objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than asBuffer
objects. For instance, callingreadable.setEncoding('utf8')
will cause the output data to be interpreted as UTF-8 data, and passed as strings. Callingreadable.setEncoding('hex')
will cause the data to be encoded in hexadecimal string format.The
Readable
stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream asBuffer
objects.const readable = getReadableStreamSomehow(); readable.setEncoding('utf8'); readable.on('data', (chunk) => { assert.equal(typeof chunk, 'string'); console.log('Got %d characters of string data:', chunk.length); });
@param encodingThe encoding to use.
- n: number): this;
By default
EventEmitter
s will print a warning if more than10
listeners are added for a particular event. This is a useful default that helps finding memory leaks. Theemitter.setMaxListeners()
method allows the limit to be modified for this specificEventEmitter
instance. The value can be set toInfinity
(or0
) to indicate an unlimited number of listeners.Returns a reference to the
EventEmitter
, so that calls can be chained. - msecs: number,callback?: () => void): void;
Sets the Http2Stream's timeout value to msecs. If a callback is provided, then it is added as a listener on the 'timeout' event on the response object.
If no 'timeout' listener is added to the request, the response, or the server, then Http2Streams are destroyed when they time out. If a handler is assigned to the request, the response, or the server's 'timeout' events, timed out sockets must be handled explicitly.
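A sketch of handling the timeout explicitly, assuming an existing server; the 30-second value and the 408 status are illustrative choices, not defaults:

```js
server.on('request', (req, res) => {
  req.setTimeout(30_000, () => {
    // A 'timeout' handler is attached, so the stream is not destroyed
    // automatically; end the response ourselves.
    res.writeHead(408);
    res.end();
  });
});
```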
- some(): Promise<boolean>;
This method is similar to Array.prototype.some and calls fn on each chunk in the stream until the awaited return value is true (or any truthy value). Once an fn call's awaited return value for a chunk is truthy, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false.
@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to true if fn returned a truthy value for at least one of the chunks.
- @param limit
the number of chunks to take from the readable.
@returns a stream with limit chunks taken.
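For example, a take() sketch with the experimental stream helpers:

```js
import { Readable } from 'node:stream';

console.log(await Readable.from([1, 2, 3, 4]).take(2).toArray()); // [ 1, 2 ]
```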
- ): Promise<any[]>;
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.
@returnsa promise containing an array with the contents of the stream.
- destination?: WritableStream): this;
The
readable.unpipe()
method detaches aWritable
stream previously attached using the pipe method.If the
destination
is not specified, then all pipes are detached.If the
destination
is specified, but no pipe is set up for it, then the method does nothing.import fs from 'node:fs'; const readable = getReadableStreamSomehow(); const writable = fs.createWriteStream('file.txt'); // All the data from readable goes into 'file.txt', // but only for the first second. readable.pipe(writable); setTimeout(() => { console.log('Stop writing to file.txt.'); readable.unpipe(writable); console.log('Manually close the file stream.'); writable.end(); }, 1000);
@param destinationOptional specific stream to unpipe
- chunk: any,encoding?: BufferEncoding): void;
Passing
chunk
asnull
signals the end of the stream (EOF) and behaves the same asreadable.push(null)
, after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.The
readable.unshift()
method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.The
stream.unshift(chunk)
method cannot be called after the'end'
event has been emitted or a runtime error will be thrown.Developers using
stream.unshift()
often should consider switching to use of aTransform
stream instead. See theAPI for stream implementers
section for more information.// Pull off a header delimited by \n\n. // Use unshift() if we get too much. // Call the callback with (error, header, stream). import { StringDecoder } from 'node:string_decoder'; function parseHeader(stream, callback) { stream.on('error', callback); stream.on('readable', onReadable); const decoder = new StringDecoder('utf8'); let header = ''; function onReadable() { let chunk; while (null !== (chunk = stream.read())) { const str = decoder.write(chunk); if (str.includes('\n\n')) { // Found the header boundary. const split = str.split(/\n\n/); header += split.shift(); const remaining = split.join('\n\n'); const buf = Buffer.from(remaining, 'utf8'); stream.removeListener('error', callback); // Remove the 'readable' listener before unshifting. stream.removeListener('readable', onReadable); if (buf.length) stream.unshift(buf); // Now the body of the message can be read from the stream. callback(null, header, stream); return; } // Still reading the header. header += str; } } }
Unlike push,
stream.unshift(chunk)
will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results ifreadable.unshift()
is called during a read (i.e. from within a _read implementation on a custom stream). Following the call toreadable.unshift()
with an immediate push will reset the reading state appropriately, however it is best to simply avoid callingreadable.unshift()
while in the process of performing a read.@param chunkChunk of data to unshift onto the read queue. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray}, {DataView} ornull
. For object mode streams,chunk
may be any JavaScript value.@param encodingEncoding of string chunks. Must be a valid
Buffer
encoding, such as'utf8'
or'ascii'
. - wrap(stream: ReadableStream): this;
Prior to Node.js 0.10, streams did not implement the entire
node:stream
module API as it is currently defined. (SeeCompatibility
for more information.)When using an older Node.js library that emits
'data'
events and has a pause method that is advisory only, thereadable.wrap()
method can be used to create aReadable
stream that uses the old stream as its data source.It will rarely be necessary to use
readable.wrap()
but the method has been provided as a convenience for interacting with older Node.js applications and libraries.import { OldReader } from './old-api-module.js'; import { Readable } from 'node:stream'; const oreader = new OldReader(); const myReader = new Readable().wrap(oreader); myReader.on('readable', () => { myReader.read(); // etc. });
@param streamAn "old style" readable stream
- ): Disposable;
Listens once to the
abort
event on the providedsignal
.Listening to the
abort
event on abort signals is unsafe and may lead to resource leaks since another third party with the signal can calle.stopImmediatePropagation()
. Unfortunately Node.js cannot change this since it would violate the web standard. Additionally, the original API makes it easy to forget to remove listeners.This API allows safely using
AbortSignal
s in Node.js APIs by solving these two issues by listening to the event such thatstopImmediatePropagation
does not prevent the listener from running.Returns a disposable so that it may be unsubscribed from more easily.
import { addAbortListener } from 'node:events'; function example(signal) { let disposable; try { signal.addEventListener('abort', (e) => e.stopImmediatePropagation()); disposable = addAbortListener(signal, (e) => { // Do something when signal is aborted. }); } finally { disposable?.[Symbol.dispose](); } }
@returnsDisposable that removes the
abort
listener. - iterable: Iterable<any, any, any> | AsyncIterable<any, any, any>,
A utility method for creating Readable Streams out of iterators.
@param iterableObject implementing the
Symbol.asyncIterator
orSymbol.iterator
iterable protocol. Emits an 'error' event if a null value is passed.@param optionsOptions provided to
new stream.Readable([options])
. By default,Readable.from()
will setoptions.objectMode
totrue
, unless this is explicitly opted out by settingoptions.objectMode
tofalse
. A utility method for creating a
Readable
from a webReadableStream
.- name: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName
.For
EventEmitter
s this behaves exactly the same as calling.listeners
on the emitter.For
EventTarget
s this is the only way to get the event listeners for the event target. This is useful for debugging and diagnostic purposes.import { getEventListeners, EventEmitter } from 'node:events'; { const ee = new EventEmitter(); const listener = () => console.log('Events are fun'); ee.on('foo', listener); console.log(getEventListeners(ee, 'foo')); // [ [Function: listener] ] } { const et = new EventTarget(); const listener = () => console.log('Events are fun'); et.addEventListener('foo', listener); console.log(getEventListeners(et, 'foo')); // [ [Function: listener] ] }
- ): number;
Returns the currently set max amount of listeners.
For
EventEmitter
s this behaves exactly the same as calling.getMaxListeners
on the emitter.For
EventTarget
s this is the only way to get the max event listeners for the event target. If the number of event handlers on a single EventTarget exceeds the max set, the EventTarget will print a warning.import { getMaxListeners, setMaxListeners, EventEmitter } from 'node:events'; { const ee = new EventEmitter(); console.log(getMaxListeners(ee)); // 10 setMaxListeners(11, ee); console.log(getMaxListeners(ee)); // 11 } { const et = new EventTarget(); console.log(getMaxListeners(et)); // 10 setMaxListeners(11, et); console.log(getMaxListeners(et)); // 11 }
- ): boolean;
Returns whether the stream has been read from or cancelled.
- emitter: EventEmitter,eventName: string | symbol,options?: StaticEventEmitterIteratorOptions): AsyncIterator<any[]>;
import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); }); for await (const event of on(ee, 'foo')) { // The execution of this inner block is synchronous and it // processes one event at a time (even with await). Do not use // if concurrent execution is required. console.log(event); // prints ['bar'] [42] } // Unreachable here
Returns an
AsyncIterator
that iterateseventName
events. It will throw if theEventEmitter
emits'error'
. It removes all listeners when exiting the loop. Thevalue
returned by each iteration is an array composed of the emitted event arguments.An
AbortSignal
can be used to cancel waiting on events:import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ac = new AbortController(); (async () => { const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); }); for await (const event of on(ee, 'foo', { signal: ac.signal })) { // The execution of this inner block is synchronous and it // processes one event at a time (even with await). Do not use // if concurrent execution is required. console.log(event); // prints ['bar'] [42] } // Unreachable here })(); process.nextTick(() => ac.abort());
Use the
close
option to specify an array of event names that will end the iteration:import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); ee.emit('close'); }); for await (const event of on(ee, 'foo', { close: ['close'] })) { console.log(event); // prints ['bar'] [42] } // the loop will exit after 'close' is emitted console.log('done'); // prints 'done'
@returnsAn
AsyncIterator
that iterateseventName
events emitted by theemitter
eventName: string,options?: StaticEventEmitterIteratorOptions): AsyncIterator<any[]>;import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); }); for await (const event of on(ee, 'foo')) { // The execution of this inner block is synchronous and it // processes one event at a time (even with await). Do not use // if concurrent execution is required. console.log(event); // prints ['bar'] [42] } // Unreachable here
Returns an
AsyncIterator
that iterateseventName
events. It will throw if theEventEmitter
emits'error'
. It removes all listeners when exiting the loop. Thevalue
returned by each iteration is an array composed of the emitted event arguments.An
AbortSignal
can be used to cancel waiting on events:import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ac = new AbortController(); (async () => { const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); }); for await (const event of on(ee, 'foo', { signal: ac.signal })) { // The execution of this inner block is synchronous and it // processes one event at a time (even with await). Do not use // if concurrent execution is required. console.log(event); // prints ['bar'] [42] } // Unreachable here })(); process.nextTick(() => ac.abort());
Use the
close
option to specify an array of event names that will end the iteration:import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); ee.emit('close'); }); for await (const event of on(ee, 'foo', { close: ['close'] })) { console.log(event); // prints ['bar'] [42] } // the loop will exit after 'close' is emitted console.log('done'); // prints 'done'
@returnsAn
AsyncIterator
that iterateseventName
events emitted by theemitter
- emitter: EventEmitter,eventName: string | symbol,options?: StaticEventEmitterOptions): Promise<any[]>;
Creates a
Promise
that is fulfilled when theEventEmitter
emits the given event or that is rejected if theEventEmitter
emits'error'
while waiting. ThePromise
will resolve with an array of all the arguments emitted to the given event.This method is intentionally generic and works with the web platform EventTarget interface, which has no special
'error'
event semantics and does not listen to the'error'
event.import { once, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); process.nextTick(() => { ee.emit('myevent', 42); }); const [value] = await once(ee, 'myevent'); console.log(value); const err = new Error('kaboom'); process.nextTick(() => { ee.emit('error', err); }); try { await once(ee, 'myevent'); } catch (err) { console.error('error happened', err); }
The special handling of the
'error'
event is only used whenevents.once()
is used to wait for another event. Ifevents.once()
is used to wait for the 'error'
event itself, then it is treated as any other kind of event without special handling:import { EventEmitter, once } from 'node:events'; const ee = new EventEmitter(); once(ee, 'error') .then(([err]) => console.log('ok', err.message)) .catch((err) => console.error('error', err.message)); ee.emit('error', new Error('boom')); // Prints: ok boom
An
AbortSignal
can be used to cancel waiting for the event:import { EventEmitter, once } from 'node:events'; const ee = new EventEmitter(); const ac = new AbortController(); async function foo(emitter, event, signal) { try { await once(emitter, event, { signal }); console.log('event emitted!'); } catch (error) { if (error.name === 'AbortError') { console.error('Waiting for the event was canceled!'); } else { console.error('There was an error', error.message); } } } foo(ee, 'foo', ac.signal); ac.abort(); // Abort waiting for the event ee.emit('foo'); // Prints: Waiting for the event was canceled!
eventName: string,options?: StaticEventEmitterOptions): Promise<any[]>;Creates a
Promise
that is fulfilled when theEventEmitter
emits the given event or that is rejected if theEventEmitter
emits'error'
while waiting. ThePromise
will resolve with an array of all the arguments emitted to the given event.This method is intentionally generic and works with the web platform EventTarget interface, which has no special
'error'
event semantics and does not listen to the'error'
event.import { once, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); process.nextTick(() => { ee.emit('myevent', 42); }); const [value] = await once(ee, 'myevent'); console.log(value); const err = new Error('kaboom'); process.nextTick(() => { ee.emit('error', err); }); try { await once(ee, 'myevent'); } catch (err) { console.error('error happened', err); }
The special handling of the
'error'
event is only used whenevents.once()
is used to wait for another event. Ifevents.once()
is used to wait for the 'error'
event itself, then it is treated as any other kind of event without special handling:import { EventEmitter, once } from 'node:events'; const ee = new EventEmitter(); once(ee, 'error') .then(([err]) => console.log('ok', err.message)) .catch((err) => console.error('error', err.message)); ee.emit('error', new Error('boom')); // Prints: ok boom
An
AbortSignal
can be used to cancel waiting for the event:import { EventEmitter, once } from 'node:events'; const ee = new EventEmitter(); const ac = new AbortController(); async function foo(emitter, event, signal) { try { await once(emitter, event, { signal }); console.log('event emitted!'); } catch (error) { if (error.name === 'AbortError') { console.error('Waiting for the event was canceled!'); } else { console.error('There was an error', error.message); } } } foo(ee, 'foo', ac.signal); ac.abort(); // Abort waiting for the event ee.emit('foo'); // Prints: Waiting for the event was canceled!
- n?: number,): void;
import { setMaxListeners, EventEmitter } from 'node:events'; const target = new EventTarget(); const emitter = new EventEmitter(); setMaxListeners(5, target, emitter);
@param nA non-negative number. The maximum number of listeners per
EventTarget
event.@param eventTargetsZero or more {EventTarget} or {EventEmitter} instances. If none are specified,
n
is set as the default max for all newly created {EventTarget} and {EventEmitter} objects. A utility method for creating a web
ReadableStream
from aReadable
.
class Http2ServerResponse<Request extends Http2ServerRequest = Http2ServerRequest>
This object is created internally by an HTTP server, not by the user. It is passed as the second parameter to the 'request' event.
- sendDate: boolean
When true, the Date header will be automatically generated and sent in the response if it is not already present in the headers. Defaults to true.
This should only be disabled for testing; HTTP requires the Date header in responses.
- readonly socket: Socket | TLSSocket
Returns a Proxy object that acts as a net.Socket (or tls.TLSSocket) but applies getters, setters, and methods based on HTTP/2 logic.
destroyed, readable, and writable properties will be retrieved from and set on response.stream.
destroy, emit, end, on and once methods will be called on response.stream.
setTimeout method will be called on response.stream.session.
pause, read, resume, and write will throw an error with code ERR_HTTP2_NO_SOCKET_MANIPULATION. See Http2Session and Sockets for more information.
All other interactions will be routed directly to the socket.
```js
import http2 from 'node:http2';
const server = http2.createServer((req, res) => {
  const ip = req.socket.remoteAddress;
  const port = req.socket.remotePort;
  res.end(`Your IP address is ${ip} and your source port is ${port}.`);
}).listen(3000);
```
- statusCode: number
When using implicit headers (not calling response.writeHead() explicitly), this property controls the status code that will be sent to the client when the headers get flushed.
```js
response.statusCode = 404;
```
After the response header has been sent to the client, this property indicates the status code which was sent out.
- statusMessage: ''
Status message is not supported by HTTP/2 (RFC 7540 8.1.2.4). It returns an empty string.
- readonly writable: boolean
Is true if it is safe to call writable.write(), which means the stream has not been destroyed, errored, or ended.
- readonly writableAborted: boolean
Returns whether the stream was destroyed or errored before emitting 'finish'.
- readonly writableCorked: number
Number of times writable.uncork() needs to be called in order to fully uncork the stream.
- readonly writableEnded: boolean
Is true after writable.end() has been called. This property does not indicate whether the data has been flushed, for this use writable.writableFinished instead.
- readonly writableHighWaterMark: number
Return the value of highWaterMark passed when creating this Writable.
- readonly writableLength: number
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the highWaterMark.
- readonly writableNeedDrain: boolean
Is true if the stream's buffer has been full and stream will emit 'drain'.
- static captureRejections: boolean
Value: boolean
Change the default
captureRejections
option on all newEventEmitter
objects. - readonly static captureRejectionSymbol: typeof captureRejectionSymbol
Value:
Symbol.for('nodejs.rejection')
See how to write a custom
rejection handler
. - static defaultMaxListeners: number
By default, a maximum of
10
listeners can be registered for any single event. This limit can be changed for individualEventEmitter
instances using theemitter.setMaxListeners(n)
method. To change the default for allEventEmitter
instances, theevents.defaultMaxListeners
property can be used. If this value is not a positive number, aRangeError
is thrown.Take caution when setting the
events.defaultMaxListeners
because the change affects allEventEmitter
instances, including those created before the change is made. However, callingemitter.setMaxListeners(n)
still has precedence overevents.defaultMaxListeners
.This is not a hard limit. The
EventEmitter
instance will allow more listeners to be added but will output a trace warning to stderr indicating that a "possible EventEmitter memory leak" has been detected. For any singleEventEmitter
, theemitter.getMaxListeners()
andemitter.setMaxListeners()
methods can be used to temporarily avoid this warning:import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.setMaxListeners(emitter.getMaxListeners() + 1); emitter.once('event', () => { // do stuff emitter.setMaxListeners(Math.max(emitter.getMaxListeners() - 1, 0)); });
The
--trace-warnings
command-line flag can be used to display the stack trace for such warnings.The emitted warning can be inspected with
process.on('warning')
and will have the additionalemitter
,type
, andcount
properties, referring to the event emitter instance, the event's name and the number of attached listeners, respectively. Itsname
property is set to'MaxListenersExceededWarning'
. - readonly static errorMonitor: typeof errorMonitor
This symbol shall be used to install a listener for only monitoring
'error'
events. Listeners installed using this symbol are called before the regular'error'
listeners are called.Installing a listener using this symbol does not change the behavior once an
'error'
event is emitted. Therefore, the process will still crash if no regular'error'
listener is installed. - event: 'close',listener: () => void): this;
Event emitter The defined events on documents including:
- close
- drain
- error
- finish
- pipe
- unpipe
event: 'drain',listener: () => void): this;Event emitter The defined events on documents including:
- close
- drain
- error
- finish
- pipe
- unpipe
event: 'error',): this;Event emitter The defined events on documents including:
- close
- drain
- error
- finish
- pipe
- unpipe
event: 'finish',listener: () => void): this;Event emitter The defined events on documents including:
- close
- drain
- error
- finish
- pipe
- unpipe
event: 'pipe',): this;Event emitter The defined events on documents including:
- close
- drain
- error
- finish
- pipe
- unpipe
event: 'unpipe',): this;Event emitter The defined events on documents including:
- close
- drain
- error
- finish
- pipe
- unpipe
event: string | symbol,listener: (...args: any[]) => void): this;Event emitter The defined events on documents including:
- close
- drain
- error
- finish
- pipe
- unpipe
- ): void;
This method adds HTTP trailing headers (a header, but at the end of the message) to the response.
Attempting to set a header field name or value that contains invalid characters will result in a TypeError being thrown.
- name: string, value: string | string[]): void;
Append a single header value to the header object.
If the value is an array, this is equivalent to calling this method multiple times.
If there were no previous values for the header, this is equivalent to calling setHeader.
Attempting to set a header field name or value that contains invalid characters will result in a TypeError being thrown.
```js
// Returns headers including "set-cookie: a" and "set-cookie: b"
const server = http2.createServer((req, res) => {
  res.setHeader('set-cookie', 'a');
  res.appendHeader('set-cookie', 'b');
  res.writeHead(200);
  res.end('ok');
});
```
- stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
The writable.cork() method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.
The primary intent of writable.cork() is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them all to writable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork() without implementing writable._writev() may have an adverse effect on throughput.
See also: writable.uncork(), writable._writev().
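A short sketch of the intended pattern (here stream stands for any writable, such as this response); deferring uncork() with process.nextTick() lets the buffered writes flush together:

```js
stream.cork();
stream.write('some ');
stream.write('data ');
// Flush the corked writes together on the next tick.
process.nextTick(() => stream.uncork());
```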
- ): void;
Call http2stream.pushStream() with the given headers, and wrap the given Http2Stream on a newly created Http2ServerResponse as the callback parameter if successful. When Http2ServerRequest is closed, the callback is called with an error ERR_HTTP2_INVALID_STREAM.
@param headers An object describing the headers
@param callback Called once http2stream.pushStream() is finished, either when the attempt to create the pushed Http2Stream has failed or has been rejected, or when the state of Http2ServerRequest is closed prior to calling the http2stream.pushStream() method
- ): this;
Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the writable stream has ended and subsequent calls to write() or end() will result in an ERR_STREAM_DESTROYED error. This is a destructive and immediate way to destroy a stream. Previous calls to write() may not have drained, and may trigger an ERR_STREAM_DESTROYED error. Use end() instead of destroy if data should flush before close, or wait for the 'drain' event before destroying the stream.
Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.
Implementors should not override this method, but instead implement writable._destroy().
@param error Optional, an error to emit with 'error' event.
- emit(event: 'close'): boolean;
Synchronously calls each of the listeners registered for the event named
eventName
, in the order they were registered, passing the supplied arguments to each.Returns
true
if the event had listeners,false
otherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener
- end(callback?: () => void): this;
This method signals to the server that all of the response headers and body have been sent; that server should consider this message complete. The method,
response.end()
, MUST be called on each response.If
data
is specified, it is equivalent to callingresponse.write(data, encoding)
followed byresponse.end(callback)
.If
callback
is specified, it will be called when the response stream is finished.end(callback?: () => void): this;This method signals to the server that all of the response headers and body have been sent; that server should consider this message complete. The method,
response.end()
, MUST be called on each response.If
data
is specified, it is equivalent to callingresponse.write(data, encoding)
followed byresponse.end(callback)
.If
callback
is specified, it will be called when the response stream is finished.end(encoding: BufferEncoding,callback?: () => void): this;This method signals to the server that all of the response headers and body have been sent; that server should consider this message complete. The method,
response.end()
, MUST be called on each response.If
data
is specified, it is equivalent to callingresponse.write(data, encoding)
followed byresponse.end(callback)
.If
callback
is specified, it will be called when the response stream is finished. Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbol
s.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]
- name: string): string;
Reads out a header that has already been queued but not sent to the client. The name is case-insensitive.
const contentType = response.getHeader('content-type');
Returns an array containing the unique names of the current outgoing headers. All header names are lowercase.
response.setHeader('Foo', 'bar'); response.setHeader('Set-Cookie', ['foo=bar', 'bar=baz']); const headerNames = response.getHeaderNames(); // headerNames === ['foo', 'set-cookie']
Returns a shallow copy of the current outgoing headers. Since a shallow copy is used, array values may be mutated without additional calls to various header-related http module methods. The keys of the returned object are the header names and the values are the respective header values. All header names are lowercase.
The object returned by the
response.getHeaders()
method does not prototypically inherit from the JavaScriptObject
. This means that typicalObject
methods such asobj.toString()
,obj.hasOwnProperty()
, and others are not defined and will not work.response.setHeader('Foo', 'bar'); response.setHeader('Set-Cookie', ['foo=bar', 'bar=baz']); const headers = response.getHeaders(); // headers === { foo: 'bar', 'set-cookie': ['foo=bar', 'bar=baz'] }
Returns the current max listener value for the
EventEmitter
which is either set byemitter.setMaxListeners(n)
or defaults to EventEmitter.defaultMaxListeners.- name: string): boolean;
Returns
true
if the header identified byname
is currently set in the outgoing headers. The header name matching is case-insensitive.const hasContentType = response.hasHeader('content-type');
- eventName: string | symbol,listener?: Function): number;
Returns the number of listeners listening for the event named
eventName
. Iflistener
is provided, it will return how many times the listener is found in the list of the listeners of the event.@param eventNameThe name of the event being listened for
@param listenerThe event handler function
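A short illustration of both forms; the optional listener argument counts how many times that specific function is registered:

import { EventEmitter } from 'node:events';

const ee = new EventEmitter();
const handler = () => {};
ee.on('ping', handler);
ee.on('ping', handler);
ee.on('ping', () => {});

console.log(ee.listenerCount('ping'));          // 3
console.log(ee.listenerCount('ping', handler)); // 2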
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName
.server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ]
- eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for
emitter.removeListener()
. - on(event: 'close',listener: () => void): this;
Adds the
listener
function to the end of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
- once(event: 'close',listener: () => void): this;
Adds a one-time
listener
function for the event namedeventName
. The next timeeventName
is triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
- event: 'close',listener: () => void): this;
Adds the
listener
function to the beginning of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
- event: 'close',listener: () => void): this;
Adds a one-time
listener
function for the event namedeventName
to the beginning of the listeners array. The next timeeventName
is triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName
, including any wrappers (such as those created by.once()
).import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log');
- eventName?: string | symbol): this;
Removes all listeners, or those of the specified
eventName
.It is bad practice to remove listeners added elsewhere in the code, particularly when the
EventEmitter
instance was created by some other component or module (e.g. sockets or file streams).Returns a reference to the
EventEmitter
, so that calls can be chained. - name: string): void;
Removes a header that has been queued for implicit sending.
response.removeHeader('Content-Encoding');
- event: 'close',listener: () => void): this;
Removes the specified
listener
from the listener array for the event namedeventName
.const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback);
removeListener()
will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specifiedeventName
, thenremoveListener()
must be called multiple times to remove each instance.Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any
removeListener()
orremoveAllListeners()
calls after emitting and before the last listener finishes execution will not remove them fromemit()
in progress. Subsequent events behave as expected.import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // A
Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the
emitter.listeners()
method will need to be recreated.When a single function has been added as a handler multiple times for a single event (as in the example below),
removeListener()
will remove the most recently added instance. In the example theonce('ping')
listener is removed:import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping');
Returns a reference to the
EventEmitter
, so that calls can be chained. - encoding: BufferEncoding): this;
The
writable.setDefaultEncoding()
method sets the defaultencoding
for aWritable
stream.@param encodingThe new default encoding
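A minimal sketch, assuming the response is used as an ordinary Writable:

// Declare the default encoding once instead of passing it to each write() call.
response.setDefaultEncoding('utf8');
response.write('résumé'); // interpreted as UTF-8
response.end();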
- name: string,value: string | number | readonly string[]): void;
Sets a single header value for implicit headers. If this header already exists in the to-be-sent headers, its value will be replaced. Use an array of strings here to send multiple headers with the same name.
response.setHeader('Content-Type', 'text/html; charset=utf-8');
or
response.setHeader('Set-Cookie', ['type=ninja', 'language=javascript']);
Attempting to set a header field name or value that contains invalid characters will result in a
TypeError
being thrown.When headers have been set with
response.setHeader()
, they will be merged with any headers passed toresponse.writeHead()
, with the headers passed toresponse.writeHead()
given precedence.// Returns content-type = text/plain const server = http2.createServer((req, res) => { res.setHeader('Content-Type', 'text/html; charset=utf-8'); res.setHeader('X-Foo', 'bar'); res.writeHead(200, { 'Content-Type': 'text/plain; charset=utf-8' }); res.end('ok'); });
- n: number): this;
By default
EventEmitter
s will print a warning if more than10
listeners are added for a particular event. This is a useful default that helps finding memory leaks. Theemitter.setMaxListeners()
method allows the limit to be modified for this specificEventEmitter
instance. The value can be set toInfinity
(or0
) to indicate an unlimited number of listeners.Returns a reference to the
EventEmitter
, so that calls can be chained. - msecs: number,callback?: () => void): void;
Sets the
Http2Stream
's timeout value tomsecs
. If a callback is provided, then it is added as a listener on the'timeout'
event on the response object.If no
'timeout'
listener is added to the request, the response, or the server, thenHttp2Stream
s are destroyed when they time out. If a handler is assigned to the request, the response, or the server's'timeout'
events, timed out sockets must be handled explicitly.
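For example, a minimal sketch in which a handler on the response keeps the stream from being destroyed automatically and ends it explicitly instead (the 5-second value is illustrative):

response.setTimeout(5000, () => {
  // A 'timeout' listener is attached, so the stream is not destroyed for us;
  // close the response ourselves.
  response.end();
});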
The writable.uncork()
method flushes all data buffered since cork was called.When using
writable.cork()
andwritable.uncork()
to manage the buffering of writes to a stream, defer calls towritable.uncork()
usingprocess.nextTick()
. Doing so allows batching of allwritable.write()
calls that occur within a given Node.js event loop phase.stream.cork(); stream.write('some '); stream.write('data '); process.nextTick(() => stream.uncork());
If the
writable.cork()
method is called multiple times on a stream, the same number of calls towritable.uncork()
must be called to flush the buffered data.stream.cork(); stream.write('some '); stream.cork(); stream.write('data '); process.nextTick(() => { stream.uncork(); // The data will not be flushed until uncork() is called a second time. stream.uncork(); });
See also:
writable.cork()
.- ): boolean;
If this method is called and
response.writeHead()
has not been called, it will switch to implicit header mode and flush the implicit headers.This sends a chunk of the response body. This method may be called multiple times to provide successive parts of the body.
In the
node:http
module, the response body is omitted when the request is a HEAD request. Similarly, the204
and304
responses must not include a message body.chunk
can be a string or a buffer. Ifchunk
is a string, the second parameter specifies how to encode it into a byte stream. By default theencoding
is'utf8'
.callback
will be called when this chunk of data is flushed.This is the raw HTTP body and has nothing to do with higher-level multi-part body encodings that may be used.
The first time
response.write()
is called, it will send the buffered header information and the first chunk of the body to the client. The second timeresponse.write()
is called, Node.js assumes data will be streamed, and sends the new data separately. That is, the response is buffered up to the first chunk of the body.Returns
true
if the entire data was flushed successfully to the kernel buffer. Returnsfalse
if all or part of the data was queued in user memory.'drain'
will be emitted when the buffer is free again.encoding: BufferEncoding,): boolean;If this method is called and
response.writeHead()
has not been called, it will switch to implicit header mode and flush the implicit headers.This sends a chunk of the response body. This method may be called multiple times to provide successive parts of the body.
In the
node:http
module, the response body is omitted when the request is a HEAD request. Similarly, the204
and304
responses must not include a message body.chunk
can be a string or a buffer. Ifchunk
is a string, the second parameter specifies how to encode it into a byte stream. By default theencoding
is'utf8'
.callback
will be called when this chunk of data is flushed.This is the raw HTTP body and has nothing to do with higher-level multi-part body encodings that may be used.
The first time
response.write()
is called, it will send the buffered header information and the first chunk of the body to the client. The second timeresponse.write()
is called, Node.js assumes data will be streamed, and sends the new data separately. That is, the response is buffered up to the first chunk of the body.Returns
true
if the entire data was flushed successfully to the kernel buffer. Returnsfalse
if all or part of the data was queued in user memory.'drain'
will be emitted when the buffer is free again.
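A minimal sketch of streaming the body in chunks and honouring backpressure via the return value and the 'drain' event:

import http2 from 'node:http2';

const server = http2.createServer((req, res) => {
  res.writeHead(200, { 'content-type': 'text/plain; charset=utf-8' });
  const ok = res.write('first chunk, ');
  if (!ok) {
    // The chunk was queued in user memory; wait for 'drain' before writing more.
    res.once('drain', () => res.end('second chunk'));
  } else {
    res.end('second chunk');
  }
});
server.listen(8000);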
Sends a status 100 Continue
to the client, indicating that the request body should be sent. See the'checkContinue'
event onHttp2Server
andHttp2SecureServer
.- hints: Record<string, string | string[]>): void;
Sends a status
103 Early Hints
to the client with a Link header, indicating that the user agent can preload/preconnect the linked resources. Thehints
is an object containing the values of headers to be sent with the early hints message.Example
const earlyHintsLink = '</styles.css>; rel=preload; as=style'; response.writeEarlyHints({ 'link': earlyHintsLink, }); const earlyHintsLinks = [ '</styles.css>; rel=preload; as=style', '</scripts.js>; rel=preload; as=script', ]; response.writeEarlyHints({ 'link': earlyHintsLinks, });
- statusCode: number,): this;
Sends a response header to the request. The status code is a 3-digit HTTP status code, like
404
. The last argument,headers
, are the response headers.Returns a reference to the
Http2ServerResponse
, so that calls can be chained.For compatibility with
HTTP/1
, a human-readablestatusMessage
may be passed as the second argument. However, because thestatusMessage
has no meaning within HTTP/2, the argument will have no effect and a process warning will be emitted.const body = 'hello world'; response.writeHead(200, { 'Content-Length': Buffer.byteLength(body), 'Content-Type': 'text/plain; charset=utf-8', });
Content-Length
is given in bytes not characters. TheBuffer.byteLength()
API may be used to determine the number of bytes in a given encoding. On outbound messages, Node.js does not check if Content-Length and the length of the body being transmitted are equal or not. However, when receiving messages, Node.js will automatically reject messages when theContent-Length
does not match the actual payload size.This method may be called at most one time on a message before
response.end()
is called.If
response.write()
orresponse.end()
are called before calling this, the implicit/mutable headers will be calculated and this function will be called automatically.When headers have been set with
response.setHeader()
, they will be merged with any headers passed toresponse.writeHead()
, with the headers passed toresponse.writeHead()
given precedence.// Returns content-type = text/plain const server = http2.createServer((req, res) => { res.setHeader('Content-Type', 'text/html; charset=utf-8'); res.setHeader('X-Foo', 'bar'); res.writeHead(200, { 'Content-Type': 'text/plain; charset=utf-8' }); res.end('ok'); });
Attempting to set a header field name or value that contains invalid characters will result in a
TypeError
being thrown.statusCode: number,statusMessage: string,): this;Sends a response header to the request. The status code is a 3-digit HTTP status code, like
404
. The last argument,headers
, are the response headers.Returns a reference to the
Http2ServerResponse
, so that calls can be chained.For compatibility with
HTTP/1
, a human-readablestatusMessage
may be passed as the second argument. However, because thestatusMessage
has no meaning within HTTP/2, the argument will have no effect and a process warning will be emitted.const body = 'hello world'; response.writeHead(200, { 'Content-Length': Buffer.byteLength(body), 'Content-Type': 'text/plain; charset=utf-8', });
Content-Length
is given in bytes not characters. TheBuffer.byteLength()
API may be used to determine the number of bytes in a given encoding. On outbound messages, Node.js does not check if Content-Length and the length of the body being transmitted are equal or not. However, when receiving messages, Node.js will automatically reject messages when theContent-Length
does not match the actual payload size.This method may be called at most one time on a message before
response.end()
is called.If
response.write()
orresponse.end()
are called before calling this, the implicit/mutable headers will be calculated and this function will be called automatically.When headers have been set with
response.setHeader()
, they will be merged with any headers passed toresponse.writeHead()
, with the headers passed toresponse.writeHead()
given precedence.// Returns content-type = text/plain const server = http2.createServer((req, res) => { res.setHeader('Content-Type', 'text/html; charset=utf-8'); res.setHeader('X-Foo', 'bar'); res.writeHead(200, { 'Content-Type': 'text/plain; charset=utf-8' }); res.end('ok'); });
Attempting to set a header field name or value that contains invalid characters will result in a
TypeError
being thrown. - ): Disposable;
Listens once to the
abort
event on the providedsignal
.Listening to the
abort
event on abort signals is unsafe and may lead to resource leaks since another third party with the signal can calle.stopImmediatePropagation()
. Unfortunately Node.js cannot change this since it would violate the web standard. Additionally, the original API makes it easy to forget to remove listeners.This API allows safely using
AbortSignal
s in Node.js APIs by solving these two issues by listening to the event such thatstopImmediatePropagation
does not prevent the listener from running.Returns a disposable so that it may be unsubscribed from more easily.
import { addAbortListener } from 'node:events'; function example(signal) { let disposable; try { signal.addEventListener('abort', (e) => e.stopImmediatePropagation()); disposable = addAbortListener(signal, (e) => { // Do something when signal is aborted. }); } finally { disposable?.[Symbol.dispose](); } }
@returnsDisposable that removes the
abort
listener. - options?: Pick<WritableOptions<Writable>, 'signal' | 'decodeStrings' | 'highWaterMark' | 'objectMode'>
A utility method for creating a
Writable
from a webWritableStream
. - name: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName
.For
EventEmitter
s this behaves exactly the same as calling.listeners
on the emitter.For
EventTarget
s this is the only way to get the event listeners for the event target. This is useful for debugging and diagnostic purposes.import { getEventListeners, EventEmitter } from 'node:events'; { const ee = new EventEmitter(); const listener = () => console.log('Events are fun'); ee.on('foo', listener); console.log(getEventListeners(ee, 'foo')); // [ [Function: listener] ] } { const et = new EventTarget(); const listener = () => console.log('Events are fun'); et.addEventListener('foo', listener); console.log(getEventListeners(et, 'foo')); // [ [Function: listener] ] }
- ): number;
Returns the currently set max amount of listeners.
For
EventEmitter
s this behaves exactly the same as calling.getMaxListeners
on the emitter.For
EventTarget
s this is the only way to get the max event listeners for the event target. If the number of event handlers on a single EventTarget exceeds the max set, the EventTarget will print a warning.import { getMaxListeners, setMaxListeners, EventEmitter } from 'node:events'; { const ee = new EventEmitter(); console.log(getMaxListeners(ee)); // 10 setMaxListeners(11, ee); console.log(getMaxListeners(ee)); // 11 } { const et = new EventTarget(); console.log(getMaxListeners(et)); // 10 setMaxListeners(11, et); console.log(getMaxListeners(et)); // 11 }
- emitter: EventEmitter,eventName: string | symbol,options?: StaticEventEmitterIteratorOptions): AsyncIterator<any[]>;
import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); }); for await (const event of on(ee, 'foo')) { // The execution of this inner block is synchronous and it // processes one event at a time (even with await). Do not use // if concurrent execution is required. console.log(event); // prints ['bar'] [42] } // Unreachable here
Returns an
AsyncIterator
that iterateseventName
events. It will throw if theEventEmitter
emits'error'
. It removes all listeners when exiting the loop. Thevalue
returned by each iteration is an array composed of the emitted event arguments.An
AbortSignal
can be used to cancel waiting on events:import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ac = new AbortController(); (async () => { const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); }); for await (const event of on(ee, 'foo', { signal: ac.signal })) { // The execution of this inner block is synchronous and it // processes one event at a time (even with await). Do not use // if concurrent execution is required. console.log(event); // prints ['bar'] [42] } // Unreachable here })(); process.nextTick(() => ac.abort());
Use the
close
option to specify an array of event names that will end the iteration:import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); ee.emit('close'); }); for await (const event of on(ee, 'foo', { close: ['close'] })) { console.log(event); // prints ['bar'] [42] } // the loop will exit after 'close' is emitted console.log('done'); // prints 'done'
@returnsAn
AsyncIterator
that iterateseventName
events emitted by theemitter
eventName: string,options?: StaticEventEmitterIteratorOptions): AsyncIterator<any[]>;import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); }); for await (const event of on(ee, 'foo')) { // The execution of this inner block is synchronous and it // processes one event at a time (even with await). Do not use // if concurrent execution is required. console.log(event); // prints ['bar'] [42] } // Unreachable here
Returns an
AsyncIterator
that iterateseventName
events. It will throw if theEventEmitter
emits'error'
. It removes all listeners when exiting the loop. Thevalue
returned by each iteration is an array composed of the emitted event arguments.An
AbortSignal
can be used to cancel waiting on events:import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ac = new AbortController(); (async () => { const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); }); for await (const event of on(ee, 'foo', { signal: ac.signal })) { // The execution of this inner block is synchronous and it // processes one event at a time (even with await). Do not use // if concurrent execution is required. console.log(event); // prints ['bar'] [42] } // Unreachable here })(); process.nextTick(() => ac.abort());
Use the
close
option to specify an array of event names that will end the iteration:import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); ee.emit('close'); }); for await (const event of on(ee, 'foo', { close: ['close'] })) { console.log(event); // prints ['bar'] [42] } // the loop will exit after 'close' is emitted console.log('done'); // prints 'done'
@returnsAn
AsyncIterator
that iterateseventName
events emitted by theemitter
- emitter: EventEmitter,eventName: string | symbol,options?: StaticEventEmitterOptions): Promise<any[]>;
Creates a
Promise
that is fulfilled when theEventEmitter
emits the given event or that is rejected if theEventEmitter
emits'error'
while waiting. ThePromise
will resolve with an array of all the arguments emitted to the given event.This method is intentionally generic and works with the web platform EventTarget interface, which has no special
'error'
event semantics and does not listen to the'error'
event.import { once, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); process.nextTick(() => { ee.emit('myevent', 42); }); const [value] = await once(ee, 'myevent'); console.log(value); const err = new Error('kaboom'); process.nextTick(() => { ee.emit('error', err); }); try { await once(ee, 'myevent'); } catch (err) { console.error('error happened', err); }
The special handling of the
'error'
event is only used whenevents.once()
is used to wait for another event. Ifevents.once()
is used to wait for the 'error'
event itself, then it is treated as any other kind of event without special handling:import { EventEmitter, once } from 'node:events'; const ee = new EventEmitter(); once(ee, 'error') .then(([err]) => console.log('ok', err.message)) .catch((err) => console.error('error', err.message)); ee.emit('error', new Error('boom')); // Prints: ok boom
An
AbortSignal
can be used to cancel waiting for the event:import { EventEmitter, once } from 'node:events'; const ee = new EventEmitter(); const ac = new AbortController(); async function foo(emitter, event, signal) { try { await once(emitter, event, { signal }); console.log('event emitted!'); } catch (error) { if (error.name === 'AbortError') { console.error('Waiting for the event was canceled!'); } else { console.error('There was an error', error.message); } } } foo(ee, 'foo', ac.signal); ac.abort(); // Abort waiting for the event ee.emit('foo'); // Prints: Waiting for the event was canceled!
eventName: string,options?: StaticEventEmitterOptions): Promise<any[]>;Creates a
Promise
that is fulfilled when theEventEmitter
emits the given event or that is rejected if theEventEmitter
emits'error'
while waiting. ThePromise
will resolve with an array of all the arguments emitted to the given event.This method is intentionally generic and works with the web platform EventTarget interface, which has no special
'error'
event semantics and does not listen to the'error'
event.import { once, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); process.nextTick(() => { ee.emit('myevent', 42); }); const [value] = await once(ee, 'myevent'); console.log(value); const err = new Error('kaboom'); process.nextTick(() => { ee.emit('error', err); }); try { await once(ee, 'myevent'); } catch (err) { console.error('error happened', err); }
The special handling of the
'error'
event is only used whenevents.once()
is used to wait for another event. Ifevents.once()
is used to wait for the 'error'
event itself, then it is treated as any other kind of event without special handling:import { EventEmitter, once } from 'node:events'; const ee = new EventEmitter(); once(ee, 'error') .then(([err]) => console.log('ok', err.message)) .catch((err) => console.error('error', err.message)); ee.emit('error', new Error('boom')); // Prints: ok boom
An
AbortSignal
can be used to cancel waiting for the event:import { EventEmitter, once } from 'node:events'; const ee = new EventEmitter(); const ac = new AbortController(); async function foo(emitter, event, signal) { try { await once(emitter, event, { signal }); console.log('event emitted!'); } catch (error) { if (error.name === 'AbortError') { console.error('Waiting for the event was canceled!'); } else { console.error('There was an error', error.message); } } } foo(ee, 'foo', ac.signal); ac.abort(); // Abort waiting for the event ee.emit('foo'); // Prints: Waiting for the event was canceled!
- n?: number,): void;
import { setMaxListeners, EventEmitter } from 'node:events'; const target = new EventTarget(); const emitter = new EventEmitter(); setMaxListeners(5, target, emitter);
@param nA non-negative number. The maximum number of listeners per
EventTarget
event.@param eventTargetsZero or more {EventTarget} or {EventEmitter} instances. If none are specified,
n
is set as the default max for all newly created {EventTarget} and {EventEmitter} objects. A utility method for creating a web
WritableStream
from aWritable
.
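A minimal sketch, assuming this is the static Writable.toWeb() helper from node:stream and using process.stdout as the wrapped destination:

import { Writable } from 'node:stream';

// Expose a Node.js Writable as a web WritableStream and write through it.
const webWritable = Writable.toWeb(process.stdout);
const writer = webWritable.getWriter();
await writer.write(new TextEncoder().encode('hello from a web stream\n'));
await writer.close();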
This symbol can be set as a property on the HTTP/2 headers object with an array value in order to provide a list of headers considered sensitive.
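A minimal sketch, assuming the symbol in question is exported as http2.sensitiveHeaders; attaching it to an outgoing headers object tells HPACK never to index the listed headers (the URL and token are illustrative):

import http2 from 'node:http2';

const client = http2.connect('https://localhost:8443');
const req = client.request({
  ':path': '/',
  'authorization': 'Bearer secret-token',
  // Never compress/index the authorization header across requests.
  [http2.sensitiveHeaders]: ['authorization'],
});
req.end();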
Returns a
ClientHttp2Session
instance.import http2 from 'node:http2'; const client = http2.connect('https://localhost:1234'); // Use the client client.close();
@param authorityThe remote HTTP/2 server to connect to. This must be in the form of a minimal, valid URL with the
http://
orhttps://
prefix, host name, and IP port (if a non-default port is used). Userinfo (user ID and password), path, querystring, and fragment details in the URL will be ignored.@param listenerWill be registered as a one-time listener of the 'connect' event.
Returns a
ClientHttp2Session
instance.import http2 from 'node:http2'; const client = http2.connect('https://localhost:1234'); // Use the client client.close();
@param authorityThe remote HTTP/2 server to connect to. This must be in the form of a minimal, valid URL with the
http://
orhttps://
prefix, host name, and IP port (if a non-default port is used). Userinfo (user ID and password), path, querystring, and fragment details in the URL will be ignored.@param listenerWill be registered as a one-time listener of the 'connect' event.
Returns a
tls.Server
instance that creates and managesHttp2Session
instances.import http2 from 'node:http2'; import fs from 'node:fs'; const options = { key: fs.readFileSync('server-key.pem'), cert: fs.readFileSync('server-cert.pem'), }; // Create a secure HTTP/2 server const server = http2.createSecureServer(options); server.on('stream', (stream, headers) => { stream.respond({ 'content-type': 'text/html; charset=utf-8', ':status': 200, }); stream.end('<h1>Hello World</h1>'); }); server.listen(8443);
@param onRequestHandlerSee
Compatibility API
function createSecureServer<Http1Request extends typeof IncomingMessage = typeof IncomingMessage, Http1Response extends typeof ServerResponse = typeof ServerResponse, Http2Request extends typeof Http2ServerRequest = typeof Http2ServerRequest, Http2Response extends typeof Http2ServerResponse = typeof Http2ServerResponse>(onRequestHandler?: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => voidReturns a
tls.Server
instance that creates and managesHttp2Session
instances.import http2 from 'node:http2'; import fs from 'node:fs'; const options = { key: fs.readFileSync('server-key.pem'), cert: fs.readFileSync('server-cert.pem'), }; // Create a secure HTTP/2 server const server = http2.createSecureServer(options); server.on('stream', (stream, headers) => { stream.respond({ 'content-type': 'text/html; charset=utf-8', ':status': 200, }); stream.end('<h1>Hello World</h1>'); }); server.listen(8443);
@param onRequestHandlerSee
Compatibility API
Returns a
net.Server
instance that creates and managesHttp2Session
instances.Since there are no browsers known that support unencrypted HTTP/2, the use of createSecureServer is necessary when communicating with browser clients.
import http2 from 'node:http2'; // Create an unencrypted HTTP/2 server. // Since there are no browsers known that support // unencrypted HTTP/2, the use of `http2.createSecureServer()` // is necessary when communicating with browser clients. const server = http2.createServer(); server.on('stream', (stream, headers) => { stream.respond({ 'content-type': 'text/html; charset=utf-8', ':status': 200, }); stream.end('<h1>Hello World</h1>'); }); server.listen(8000);
@param onRequestHandlerSee
Compatibility API
function createServer<Http1Request extends typeof IncomingMessage = typeof IncomingMessage, Http1Response extends typeof ServerResponse = typeof ServerResponse, Http2Request extends typeof Http2ServerRequest = typeof Http2ServerRequest, Http2Response extends typeof Http2ServerResponse = typeof Http2ServerResponse>(onRequestHandler?: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => voidReturns a
net.Server
instance that creates and managesHttp2Session
instances.Since there are no browsers known that support unencrypted HTTP/2, the use of createSecureServer is necessary when communicating with browser clients.
import http2 from 'node:http2'; // Create an unencrypted HTTP/2 server. // Since there are no browsers known that support // unencrypted HTTP/2, the use of `http2.createSecureServer()` // is necessary when communicating with browser clients. const server = http2.createServer(); server.on('stream', (stream, headers) => { stream.respond({ 'content-type': 'text/html; charset=utf-8', ':status': 200, }); stream.end('<h1>Hello World</h1>'); }); server.listen(8000);
@param onRequestHandlerSee
Compatibility API
Returns an object containing the default settings for an
Http2Session
instance. This method returns a new object instance every time it is called so instances returned may be safely modified for use.Returns a
Buffer
instance containing serialized representation of the given HTTP/2 settings as specified in the HTTP/2 specification. This is intended for use with theHTTP2-Settings
header field.import http2 from 'node:http2'; const packed = http2.getPackedSettings({ enablePush: false }); console.log(packed.toString('base64')); // Prints: AAIAAAAA
Returns a
HTTP/2 Settings Object
containing the deserialized settings from the givenBuffer
as generated byhttp2.getPackedSettings()
.@param bufThe packed settings.
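For example, round-tripping a settings object through its packed form:

import http2 from 'node:http2';

const packed = http2.getPackedSettings({ enablePush: false });
const settings = http2.getUnpackedSettings(packed);
console.log(settings.enablePush); // false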
- function performServerHandshake<Http1Request extends typeof IncomingMessage = typeof IncomingMessage, Http1Response extends typeof ServerResponse = typeof ServerResponse, Http2Request extends typeof Http2ServerRequest = typeof Http2ServerRequest, Http2Response extends typeof Http2ServerResponse = typeof Http2ServerResponse>(
Create an HTTP/2 server session from an existing socket.
@param socketA Duplex Stream
@param optionsAny
{@link createServer}
options can be provided.
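A hedged sketch of how this might be used, assuming the function is exposed as http2.performServerHandshake(socket, options) and returns a server-side Http2Session; the port is illustrative:

import http2 from 'node:http2';
import net from 'node:net';

// Accept raw TCP connections and perform the HTTP/2 handshake on each socket.
const tcpServer = net.createServer((socket) => {
  const session = http2.performServerHandshake(socket);
  session.on('stream', (stream) => {
    stream.respond({ ':status': 200, 'content-type': 'text/plain' });
    stream.end('ok');
  });
});
tcpServer.listen(8000);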
Type definitions
interface AlternativeServiceOptions
interface ClientHttp2Session
The
EventEmitter
class is defined and exposed by thenode:events
module:import { EventEmitter } from 'node:events';
All
EventEmitter
s emit the event'newListener'
when new listeners are added and'removeListener'
when existing listeners are removed.It supports the following option:
- readonly alpnProtocol?: string
Value will be
undefined
if theHttp2Session
is not yet connected to a socket,h2c
if theHttp2Session
is not connected to aTLSSocket
, or will return the value of the connectedTLSSocket
's ownalpnProtocol
property. - readonly closed: boolean
Will be
true
if thisHttp2Session
instance has been closed, otherwisefalse
. - readonly connecting: boolean
Will be
true
if thisHttp2Session
instance is still connecting, will be set tofalse
before emittingconnect
event and/or calling thehttp2.connect
callback. - readonly destroyed: boolean
Will be
true
if thisHttp2Session
instance has been destroyed and must no longer be used, otherwisefalse
. - readonly encrypted?: boolean
Value is
undefined
if theHttp2Session
session socket has not yet been connected,true
if theHttp2Session
is connected with aTLSSocket
, andfalse
if theHttp2Session
is connected to any other kind of socket or stream. - readonly localSettings: Settings
A prototype-less object describing the current local settings of this
Http2Session
. The local settings are local to thisHttp2Session
instance. - readonly originSet?: string[]
If the
Http2Session
is connected to aTLSSocket
, theoriginSet
property will return anArray
of origins for which theHttp2Session
may be considered authoritative.The
originSet
property is only available when using a secure TLS connection. - readonly pendingSettingsAck: boolean
Indicates whether the
Http2Session
is currently waiting for acknowledgment of a sentSETTINGS
frame. Will betrue
after calling thehttp2session.settings()
method. Will befalse
once all sentSETTINGS
frames have been acknowledged. - readonly remoteSettings: Settings
A prototype-less object describing the current remote settings of this
Http2Session
. The remote settings are set by the connected HTTP/2 peer. - readonly socket: Socket | TLSSocket
Returns a
Proxy
object that acts as anet.Socket
(ortls.TLSSocket
) but limits available methods to ones safe to use with HTTP/2.destroy
,emit
,end
,pause
,read
,resume
, andwrite
will throw an error with codeERR_HTTP2_NO_SOCKET_MANIPULATION
. SeeHttp2Session and Sockets
for more information.setTimeout
method will be called on thisHttp2Session
.All other interactions will be routed directly to the socket.
- readonly state: SessionState
Provides miscellaneous information about the current state of the
Http2Session
.An object describing the current status of this
Http2Session
. - readonly type: number
The
http2session.type
will be equal tohttp2.constants.NGHTTP2_SESSION_SERVER
if thisHttp2Session
instance is a server, andhttp2.constants.NGHTTP2_SESSION_CLIENT
if the instance is a client. - event: 'altsvc',listener: (alt: string, origin: string, stream: number) => void): this;
Alias for
emitter.on(eventName, listener)
.event: 'origin',listener: (origins: string[]) => void): this;Alias for
emitter.on(eventName, listener)
.event: 'connect',): this;Alias for
emitter.on(eventName, listener)
.event: 'stream',listener: (stream: ClientHttp2Stream, headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void): this;Alias for
emitter.on(eventName, listener)
.event: string | symbol,listener: (...args: any[]) => void): this;Alias for
emitter.on(eventName, listener)
. - callback?: () => void): void;
Gracefully closes the
Http2Session
, allowing any existing streams to complete on their own and preventing newHttp2Stream
instances from being created. Once closed,http2session.destroy()
might be called if there are no openHttp2Stream
instances.If specified, the
callback
function is registered as a handler for the'close'
event. - code?: number): void;
Immediately terminates the
Http2Session
and the associatednet.Socket
ortls.TLSSocket
.Once destroyed, the
Http2Session
will emit the'close'
event. Iferror
is not undefined, an'error'
event will be emitted immediately before the'close'
event.If there are any remaining open
Http2Streams
associated with theHttp2Session
, those will also be destroyed.@param errorAn
Error
object if theHttp2Session
is being destroyed due to an error.@param codeThe HTTP/2 error code to send in the final
GOAWAY
frame. If unspecified, anderror
is not undefined, the default isINTERNAL_ERROR
, otherwise defaults toNO_ERROR
. - emit(event: 'altsvc',alt: string,origin: string,stream: number): boolean;
Synchronously calls each of the listeners registered for the event named
eventName
, in the order they were registered, passing the supplied arguments to each.Returns
true
if the event had listeners,false
otherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener
emit(event: 'origin',origins: readonly string[]): boolean;Synchronously calls each of the listeners registered for the event named
eventName
, in the order they were registered, passing the supplied arguments to each.Returns
true
if the event had listeners,false
otherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener
emit(event: 'connect',): boolean;Synchronously calls each of the listeners registered for the event named
eventName
, in the order they were registered, passing the supplied arguments to each.Returns
true
if the event had listeners,false
otherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener
emit(event: 'stream',flags: number): boolean;Synchronously calls each of the listeners registered for the event named
eventName
, in the order they were registered, passing the supplied arguments to each.Returns
true
if the event had listeners,false
otherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener
emit(event: string | symbol,...args: any[]): boolean;Synchronously calls each of the listeners registered for the event named
eventName
, in the order they were registered, passing the supplied arguments to each.Returns
true
if the event had listeners,false
otherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener
Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbol
s.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]
Returns the current max listener value for the
EventEmitter
which is either set byemitter.setMaxListeners(n)
or defaults to EventEmitter.defaultMaxListeners.- code?: number,lastStreamID?: number,opaqueData?: ArrayBufferView<ArrayBufferLike>): void;
Transmits a
GOAWAY
frame to the connected peer without shutting down theHttp2Session
.@param codeAn HTTP/2 error code
@param lastStreamIDThe numeric ID of the last processed
Http2Stream
@param opaqueDataA
TypedArray
orDataView
instance containing additional data to be carried within the GOAWAY frame.
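For example, signalling that no new streams will be accepted while keeping existing ones alive (a sketch; session is assumed to be an established Http2Session and the opaque data is illustrative):

// NGHTTP2_NO_ERROR plus optional opaque debug data; the session stays open.
session.goaway(http2.constants.NGHTTP2_NO_ERROR, 0, Buffer.from('maintenance'));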
- eventName: string | symbol,listener?: Function): number;
Returns the number of listeners listening for the event named
eventName
. Iflistener
is provided, it will return how many times the listener is found in the list of the listeners of the event.@param eventNameThe name of the event being listened for
@param listenerThe event handler function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName
.server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ]
- eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for
emitter.removeListener()
. - on(event: 'altsvc',listener: (alt: string, origin: string, stream: number) => void): this;
Adds the
listener
function to the end of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
on(event: 'origin',listener: (origins: string[]) => void): this;Adds the
listener
function to the end of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
on(event: 'connect',): this;Adds the
listener
function to the end of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
on(event: 'stream',listener: (stream: ClientHttp2Stream, headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void): this;Adds the
listener
function to the end of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
on(event: string | symbol,listener: (...args: any[]) => void): this;Adds the
listener
function to the end of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
- once(event: 'altsvc',listener: (alt: string, origin: string, stream: number) => void): this;
Adds a one-time
listener
function for the event namedeventName
. The next timeeventName
is triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
once(event: 'origin',listener: (origins: string[]) => void): this;Adds a one-time
listener
function for the event namedeventName
. The next timeeventName
is triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
once(event: 'connect',): this;Adds a one-time
listener
function for the event namedeventName
. The next timeeventName
is triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
once(event: 'stream',listener: (stream: ClientHttp2Stream, headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void): this;Adds a one-time
listener
function for the event namedeventName
. The next timeeventName
is triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
once(event: string | symbol,listener: (...args: any[]) => void): this;Adds a one-time
listener
function for the event namedeventName
. The next timeeventName
is triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
- ping(): boolean;
Sends a
PING
frame to the connected HTTP/2 peer. Acallback
function must be provided. The method will returntrue
if thePING
was sent,false
otherwise.The maximum number of outstanding (unacknowledged) pings is determined by the
maxOutstandingPings
configuration option. The default maximum is 10.If provided, the
payload
must be aBuffer
,TypedArray
, orDataView
containing 8 bytes of data that will be transmitted with thePING
and returned with the ping acknowledgment.The callback will be invoked with three arguments: an error argument that will be
null
if thePING
was successfully acknowledged, aduration
argument that reports the number of milliseconds elapsed since the ping was sent and the acknowledgment was received, and aBuffer
containing the 8-bytePING
payload.session.ping(Buffer.from('abcdefgh'), (err, duration, payload) => { if (!err) { console.log(`Ping acknowledged in ${duration} milliseconds`); console.log(`With payload '${payload.toString()}'`); } });
If the
payload
argument is not specified, the default payload will be the 64-bit timestamp (little endian) marking the start of thePING
duration. - prependListener(event: 'altsvc', listener: (alt: string, origin: string, stream: number) => void): this;
Adds the
listener
function to the beginning of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
prependListener(event: 'origin', listener: (origins: string[]) => void): this;Adds the
listener
function to the beginning of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
prependListener(event: 'connect',): this;Adds the
listener
function to the beginning of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
prependListener(event: 'stream', listener: (stream: ClientHttp2Stream, headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void): this;Adds the
listener
function to the beginning of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
prependListener(event: string | symbol, listener: (...args: any[]) => void): this;Adds the
listener
function to the beginning of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
- prependOnceListener(event: 'altsvc', listener: (alt: string, origin: string, stream: number) => void): this;
Adds a one-time
listener
function for the event namedeventName
to the beginning of the listeners array. The next timeeventName
is triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
prependOnceListener(event: 'origin', listener: (origins: string[]) => void): this;Adds a one-time
listener
function for the event namedeventName
to the beginning of the listeners array. The next timeeventName
is triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
prependOnceListener(event: 'connect',): this;Adds a one-time
listener
function for the event namedeventName
to the beginning of the listeners array. The next timeeventName
is triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
prependOnceListener(event: 'stream', listener: (stream: ClientHttp2Stream, headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void): this;Adds a one-time
listener
function for the event namedeventName
to the beginning of the listeners array. The next timeeventName
is triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
prependOnceListener(event: string | symbol, listener: (...args: any[]) => void): this;Adds a one-time
listener
function for the event namedeventName
to the beginning of the listeners array. The next timeeventName
is triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
- rawListeners(eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName
, including any wrappers (such as those created by.once()
).import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log');
Calls
ref()
on thisHttp2Session
instance's underlyingnet.Socket
.- removeAllListeners(eventName?: string | symbol): this;
Removes all listeners, or those of the specified
eventName
.It is bad practice to remove listeners added elsewhere in the code, particularly when the
EventEmitter
instance was created by some other component or module (e.g. sockets or file streams).Returns a reference to the
EventEmitter
, so that calls can be chained. - eventName: string | symbol,listener: (...args: any[]) => void): this;
Removes the specified
listener
from the listener array for the event namedeventName
.const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback);
removeListener()
will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specifiedeventName
, thenremoveListener()
must be called multiple times to remove each instance.Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any
removeListener()
orremoveAllListeners()
calls after emitting and before the last listener finishes execution will not remove them fromemit()
in progress. Subsequent events behave as expected.import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // A
Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the
emitter.listeners()
method will need to be recreated.When a single function has been added as a handler multiple times for a single event (as in the example below),
removeListener()
will remove the most recently added instance. In the example theonce('ping')
listener is removed:import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping');
Returns a reference to the
EventEmitter
, so that calls can be chained. For HTTP/2 Client
Http2Session
instances only, thehttp2session.request()
creates and returns anHttp2Stream
instance that can be used to send an HTTP/2 request to the connected server.When a
ClientHttp2Session
is first created, the socket may not yet be connected. If clienthttp2session.request()
is called during this time, the actual request will be deferred until the socket is ready to go. If thesession
is closed before the actual request is executed, an ERR_HTTP2_GOAWAY_SESSION
error is thrown. This method is only available if
http2session.type
is equal tohttp2.constants.NGHTTP2_SESSION_CLIENT
.import http2 from 'node:http2'; const clientSession = http2.connect('https://localhost:1234'); const { HTTP2_HEADER_PATH, HTTP2_HEADER_STATUS, } = http2.constants; const req = clientSession.request({ [HTTP2_HEADER_PATH]: '/' }); req.on('response', (headers) => { console.log(headers[HTTP2_HEADER_STATUS]); req.on('data', (chunk) => { // .. }); req.on('end', () => { // .. }); });
When the
options.waitForTrailers
option is set, the'wantTrailers'
event is emitted immediately after queuing the last chunk of payload data to be sent. Thehttp2stream.sendTrailers()
method can then be called to send trailing headers to the peer.When
options.waitForTrailers
is set, theHttp2Stream
will not automatically close when the finalDATA
frame is transmitted. User code must call eitherhttp2stream.sendTrailers()
orhttp2stream.close()
to close theHttp2Stream
.When
options.signal
is set with anAbortSignal
and thenabort
on the correspondingAbortController
is called, the request will emit an'error'
event with anAbortError
error.If the
:method
and:path
pseudo-headers are not specified withinheaders
, they respectively default to::method
='GET'
:path
=/
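A minimal client-side sketch of cancelling a request through options.signal; the URL, path, and timeout below are placeholder values, not part of the original example:
import http2 from 'node:http2';
const client = http2.connect('https://localhost:8443');
const controller = new AbortController();
// Pass the signal when creating the request.
const req = client.request({ ':path': '/slow' }, { signal: controller.signal });
req.on('error', (err) => {
  console.error(err.name); // 'AbortError' once abort() has been called
  client.close();
});
// Cancel the in-flight request after 100 ms.
setTimeout(() => controller.abort(), 100);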
- setLocalWindowSize(windowSize: number): void;
Sets the local endpoint's window size. The
windowSize
is the total window size to set, not the delta.import http2 from 'node:http2'; const server = http2.createServer(); const expectedWindowSize = 2 ** 20; server.on('connect', (session) => { // Set local window size to be 2 ** 20 session.setLocalWindowSize(expectedWindowSize); });
- setMaxListeners(n: number): this;
By default
EventEmitter
s will print a warning if more than10
listeners are added for a particular event. This is a useful default that helps finding memory leaks. Theemitter.setMaxListeners()
method allows the limit to be modified for this specificEventEmitter
instance. The value can be set toInfinity
(or0
) to indicate an unlimited number of listeners.Returns a reference to the
EventEmitter
, so that calls can be chained. - msecs: number,callback?: () => void): void;
Used to set a callback function that is called when there is no activity on the
Http2Session
aftermsecs
milliseconds. The givencallback
is registered as a listener on the'timeout'
event. - ): void;
Updates the current local settings for this
Http2Session
and sends a newSETTINGS
frame to the connected HTTP/2 peer.Once called, the
http2session.pendingSettingsAck
property will betrue
while the session is waiting for the remote peer to acknowledge the new settings.The new settings will not become effective until the
SETTINGS
acknowledgment is received and the'localSettings'
event is emitted. It is possible to send multipleSETTINGS
frames while acknowledgment is still pending.@param callbackCallback that is called once the session is connected or right away if the session is already connected.
Calls
unref()
on thisHttp2Session
instance's underlyingnet.Socket
.
interface ClientHttp2Stream
Duplex streams are streams that implement both the
Readable
andWritable
interfaces.Examples of
Duplex
streams include:TCP sockets
zlib streams
crypto streams
- readonly aborted: boolean
Set to
true
if theHttp2Stream
instance was aborted abnormally. When set, the'aborted'
event will have been emitted. - allowHalfOpen: boolean
If
false
then the stream will automatically end the writable side when the readable side ends. Set initially by theallowHalfOpen
constructor option, which defaults totrue
.This can be changed manually to change the half-open behavior of an existing
Duplex
stream instance, but must be changed before the'end'
event is emitted. - readonly bufferSize: number
This property shows the number of characters currently buffered to be written. See
net.Socket.bufferSize
for details. - readonly destroyed: boolean
Set to
true
if theHttp2Stream
instance has been destroyed and is no longer usable. - readonly endAfterHeaders: boolean
Set to
true
if theEND_STREAM
flag was set in the request or response HEADERS frame received, indicating that no additional data should be received and the readable side of theHttp2Stream
will be closed. - readonly id?: number
The numeric stream identifier of this
Http2Stream
instance. Set toundefined
if the stream identifier has not yet been assigned. - readonly pending: boolean
Set to
true
if theHttp2Stream
instance has not yet been assigned a numeric stream identifier. - readable: boolean
Is
true
if it is safe to call read, which means the stream has not been destroyed or emitted'error'
or'end'
. - readonly readableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'end'
. - readonly readableEncoding: null | BufferEncoding
Getter for the property
encoding
of a givenReadable
stream. Theencoding
property can be set using the setEncoding method. - readonly readableFlowing: null | boolean
This property reflects the current state of a
Readable
stream as described in the Three states section. - readonly readableHighWaterMark: number
Returns the value of
highWaterMark
passed when creating thisReadable
. - readonly readableLength: number
This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the
highWaterMark
. - readonly rstCode: number
Set to the
RST_STREAM
error code
reported when theHttp2Stream
is destroyed after either receiving anRST_STREAM
frame from the connected peer, callinghttp2stream.close()
, orhttp2stream.destroy()
. Will beundefined
if theHttp2Stream
has not been closed. - readonly sentHeaders: OutgoingHttpHeaders
An object containing the outbound headers sent for this
Http2Stream
. - readonly sentInfoHeaders?: OutgoingHttpHeaders[]
An array of objects containing the outbound informational (additional) headers sent for this
Http2Stream
. - readonly sentTrailers?: OutgoingHttpHeaders
An object containing the outbound trailers sent for this
HttpStream
. - readonly session: undefined | Http2Session
A reference to the
Http2Session
instance that owns thisHttp2Stream
. The value will beundefined
after theHttp2Stream
instance is destroyed. - readonly state: StreamState
Provides miscellaneous information about the current state of the
Http2Stream
.A current state of this
Http2Stream
. - readonly writable: boolean
Is
true
if it is safe to callwritable.write()
, which means the stream has not been destroyed, errored, or ended. - readonly writableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'finish'
. - readonly writableCorked: number
Number of times
writable.uncork()
needs to be called in order to fully uncork the stream. - readonly writableEnded: boolean
Is
true
afterwritable.end()
has been called. This property does not indicate whether the data has been flushed, for this usewritable.writableFinished
instead. - readonly writableHighWaterMark: number
Return the value of
highWaterMark
passed when creating thisWritable
. - readonly writableLength: number
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the
highWaterMark
. - readonly writableNeedDrain: boolean
Is
true
if the stream's buffer has been full and stream will emit'drain'
. Calls
readable.destroy()
with anAbortError
and returns a promise that fulfills when the stream is finished.- addListener(event: 'continue', listener: () => {}): this;
Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
addListener(event: 'headers',): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
addListener(event: 'push',): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
addListener(event: 'response',): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
addListener(event: string | symbol, listener: (...args: any[]) => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
This method returns a new stream with chunks of the underlying stream paired with a counter in the form
[index, chunk]
. The first index value is0
and it increases by 1 for each chunk produced.@returnsa stream of indexed pairs.
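A minimal sketch, assuming this entry corresponds to readable.asIndexedPairs():
import { Readable } from 'node:stream';
const pairs = await Readable.from(['x', 'y']).asIndexedPairs().toArray();
console.log(pairs); // [ [ 0, 'x' ], [ 1, 'y' ] ]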
- close(code?: number, callback?: () => void): void;
Closes the
Http2Stream
instance by sending anRST_STREAM
frame to the connected HTTP/2 peer.@param codeUnsigned 32-bit integer identifying the error code.
@param callbackAn optional function registered to listen for the
'close'
event. - compose(stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
The
writable.cork()
method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.The primary intent of
writable.cork()
is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination,writable.cork()
buffers all the chunks untilwritable.uncork()
is called, which will pass them all towritable._writev()
, if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use ofwritable.cork()
without implementingwritable._writev()
may have an adverse effect on throughput.See also:
writable.uncork()
,writable._writev()
.- ): this;
Destroy the stream. Optionally emit an
'error'
event, and emit a'close'
event (unlessemitClose
is set tofalse
). After this call, the readable stream will release any internal resources and subsequent calls topush()
will be ignored.Once
destroy()
has been called any further calls will be a no-op and no further errors except from_destroy()
may be emitted as'error'
.Implementors should not override this method, but instead implement
readable._destroy()
.@param errorError which will be passed as payload in
'error'
event - drop(limit: number,
This method returns a new stream with the first limit chunks dropped from the start.
@param limitthe number of chunks to drop from the readable.
@returnsa stream with limit chunks dropped from the start.
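For example, a small sketch using Readable.from:
import { Readable } from 'node:stream';
const remaining = await Readable.from([1, 2, 3, 4]).drop(2).toArray();
console.log(remaining); // [ 3, 4 ]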
- emit(event: 'continue'): boolean;
Synchronously calls each of the listeners registered for the event named
eventName
, in the order they were registered, passing the supplied arguments to each.Returns
true
if the event had listeners,false
otherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener
- end(cb?: () => void): this;
Calling the
writable.end()
method signals that no more data will be written to theWritable
. The optionalchunk
andencoding
arguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!
end(chunk: any,cb?: () => void): this;Calling the
writable.end()
method signals that no more data will be written to theWritable
. The optionalchunk
andencoding
arguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!
@param chunkOptional data to write. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunk
may be any JavaScript value other thannull
.end(chunk: any,encoding: BufferEncoding,cb?: () => void): this;Calling the
writable.end()
method signals that no more data will be written to theWritable
. The optionalchunk
andencoding
arguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!
@param chunkOptional data to write. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunk
may be any JavaScript value other thannull
.@param encodingThe encoding if
chunk
is a string Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbol
s.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]
- ): Promise<boolean>;
This method is similar to
Array.prototype.every
and calls fn on each chunk in the stream to check if all awaited return values are truthy for fn. Once an fn call on a chunk's await
ed return value is falsy, the stream is destroyed and the promise is fulfilled withfalse
. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled withtrue
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
true
if fn returned a truthy value for every one of the chunks. This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be
await
ed.@param fna function to filter chunks from the stream. Async or not.
@returnsa stream filtered with the predicate fn.
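A brief sketch; the predicate may also be an async function:
import { Readable } from 'node:stream';
const evens = await Readable.from([1, 2, 3, 4, 5]).filter((n) => n % 2 === 0).toArray();
console.log(evens); // [ 2, 4 ]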
- ): Promise<undefined | T>;
This method is similar to
Array.prototype.find
and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled withundefined
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefined
if no element was found.find(): Promise<any>;This method is similar to
Array.prototype.find
and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled withundefined
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefined
if no element was found. This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.
It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.
@param fna function to map over every chunk in the stream. May be async. May be a stream or generator.
@returnsa stream flat-mapped with the function fn.
- ): Promise<void>;
This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be
await
ed.This method is different from
for await...of
loops in that it can optionally process chunks concurrently. In addition, aforEach
iteration can only be stopped by having passed asignal
option and aborting the related AbortController whilefor await...of
can be stopped withbreak
orreturn
. In either case the stream will be destroyed.This method is different from listening to the
'data'
event in that it uses thereadable
event in the underlying machinery and can limit the number of concurrent fn calls.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise for when the stream has finished.
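A brief sketch; the concurrency value is illustrative:
import { Readable } from 'node:stream';
await Readable.from(['a', 'b', 'c']).forEach(async (chunk) => {
  console.log(chunk);
}, { concurrency: 2 });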
Returns the current max listener value for the
EventEmitter
which is either set byemitter.setMaxListeners(n)
or defaults to EventEmitter.defaultMaxListeners.The
readable.isPaused()
method returns the current operating state of theReadable
. This is used primarily by the mechanism that underlies thereadable.pipe()
method. In most typical cases, there will be no reason to use this method directly.const readable = new stream.Readable(); readable.isPaused(); // === false readable.pause(); readable.isPaused(); // === true readable.resume(); readable.isPaused(); // === false
- options?: { destroyOnReturn: boolean }): AsyncIterator<any>;
The iterator created by this method gives users the option to cancel the destruction of the stream if the
for await...of
loop is exited byreturn
,break
, orthrow
, or if the iterator should destroy the stream if the stream emitted an error during iteration. - eventName: string | symbol,listener?: Function): number;
Returns the number of listeners listening for the event named
eventName
. Iflistener
is provided, it will return how many times the listener is found in the list of the listeners of the event.@param eventNameThe name of the event being listened for
@param listenerThe event handler function
- listeners(eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName
.server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ]
- map(
This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be
await
ed before being passed to the result stream.@param fna function to map over every chunk in the stream. Async or not.
@returnsa stream mapped with the function fn.
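A brief sketch:
import { Readable } from 'node:stream';
const doubled = await Readable.from([1, 2, 3]).map((n) => n * 2).toArray();
console.log(doubled); // [ 2, 4, 6 ]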
- off(eventName: string | symbol, listener: (...args: any[]) => void): this;
Alias for
emitter.removeListener()
. - on(event: 'continue',listener: () => {}): this;
Adds the
listener
function to the end of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
- once(event: 'continue',listener: () => {}): this;
Adds a one-time
listener
function for the event namedeventName
. The next timeeventName
is triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
The
readable.pause()
method will cause a stream in flowing mode to stop emitting'data'
events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.const readable = getReadableStreamSomehow(); readable.on('data', (chunk) => { console.log(`Received ${chunk.length} bytes of data.`); readable.pause(); console.log('There will be no additional data for 1 second.'); setTimeout(() => { console.log('Now data will start flowing again.'); readable.resume(); }, 1000); });
The
readable.pause()
method has no effect if there is a'readable'
event listener.- prependListener(event: 'continue', listener: () => {}): this;
Adds the
listener
function to the beginning of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
prependListener(event: 'headers',): this;prependListener(event: 'push',): this;prependListener(event: 'response',): this;
Adds a one-time
listener
function for the event namedeventName
to the beginning of the listeners array. The next timeeventName
is triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
prependOnceListener(event: 'headers',): this;prependOnceListener(event: 'push',): this;prependOnceListener(event: 'response',): this;
Returns a copy of the array of listeners for the event named
eventName
, including any wrappers (such as those created by.once()
).import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log');
- read(size?: number): any;
The
readable.read()
method reads data out of the internal buffer and returns it. If no data is available to be read,null
is returned. By default, the data is returned as aBuffer
object unless an encoding has been specified using thereadable.setEncoding()
method or the stream is operating in object mode.The optional
size
argument specifies a specific number of bytes to read. Ifsize
bytes are not available to be read,null
will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.If the
size
argument is not specified, all of the data contained in the internal buffer will be returned.The
size
argument must be less than or equal to 1 GiB.The
readable.read()
method should only be called onReadable
streams operating in paused mode. In flowing mode,readable.read()
is called automatically until the internal buffer is fully drained.const readable = getReadableStreamSomehow(); // 'readable' may be triggered multiple times as data is buffered in readable.on('readable', () => { let chunk; console.log('Stream is readable (new data received in buffer)'); // Use a loop to make sure we read all currently available data while (null !== (chunk = readable.read())) { console.log(`Read ${chunk.length} bytes of data...`); } }); // 'end' will be triggered once when there is no more data available readable.on('end', () => { console.log('Reached end of stream.'); });
Each call to
readable.read()
returns a chunk of data, ornull
. The chunks are not concatenated. Awhile
loop is necessary to consume all data currently in the buffer. When reading a large file, .read()
may returnnull
, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new'readable'
event will be emitted when there is more data in the buffer. Finally the'end'
event will be emitted when there is no more data to come.Therefore to read a file's whole contents from a
readable
, it is necessary to collect chunks across multiple'readable'
events:const chunks = []; readable.on('readable', () => { let chunk; while (null !== (chunk = readable.read())) { chunks.push(chunk); } }); readable.on('end', () => { const content = chunks.join(''); });
A
Readable
stream in object mode will always return a single item from a call toreadable.read(size)
, regardless of the value of thesize
argument.If the
readable.read()
method returns a chunk of data, a'data'
event will also be emitted.Calling read after the
'end'
event has been emitted will returnnull
. No runtime error will be raised.@param sizeOptional argument to specify how much data to read.
- initial?: undefined,): Promise<T>;
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeError
with theERR_INVALID_ARGS
code property.The reducer function iterates the stream element-by-element which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to
readable.map
method.@param fna reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
initial: T,): Promise<T>;This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeError
with theERR_INVALID_ARGS
code property.The reducer function iterates the stream element-by-element which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to
readable.map
method.@param fna reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
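A brief sketch using an explicit initial value:
import { Readable } from 'node:stream';
const total = await Readable.from([1, 2, 3, 4]).reduce((sum, n) => sum + n, 0);
console.log(total); // 10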
- removeAllListeners(eventName?: string | symbol): this;
Removes all listeners, or those of the specified
eventName
.It is bad practice to remove listeners added elsewhere in the code, particularly when the
EventEmitter
instance was created by some other component or module (e.g. sockets or file streams).Returns a reference to the
EventEmitter
, so that calls can be chained. - event: 'close',listener: () => void): this;
Removes the specified
listener
from the listener array for the event namedeventName
.const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback);
removeListener()
will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specifiedeventName
, thenremoveListener()
must be called multiple times to remove each instance.Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any
removeListener()
orremoveAllListeners()
calls after emitting and before the last listener finishes execution will not remove them fromemit()
in progress. Subsequent events behave as expected.import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // A
Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the
emitter.listeners()
method will need to be recreated.When a single function has been added as a handler multiple times for a single event (as in the example below),
removeListener()
will remove the most recently added instance. In the example theonce('ping')
listener is removed:import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping');
Returns a reference to the
EventEmitter
, so that calls can be chained. The
readable.resume()
method causes an explicitly pausedReadable
stream to resume emitting'data'
events, switching the stream into flowing mode.The
readable.resume()
method can be used to fully consume the data from a stream without actually processing any of that data:getReadableStreamSomehow() .resume() .on('end', () => { console.log('Reached the end, but did not read anything.'); });
The
readable.resume()
method has no effect if there is a'readable'
event listener.- ): void;
Sends a trailing
HEADERS
frame to the connected HTTP/2 peer. This method will cause theHttp2Stream
to be immediately closed and must only be called after the'wantTrailers'
event has been emitted. When sending a request or sending a response, theoptions.waitForTrailers
option must be set in order to keep theHttp2Stream
open after the finalDATA
frame so that trailers can be sent.import http2 from 'node:http2'; const server = http2.createServer(); server.on('stream', (stream) => { stream.respond(undefined, { waitForTrailers: true }); stream.on('wantTrailers', () => { stream.sendTrailers({ xyz: 'abc' }); }); stream.end('Hello World'); });
The HTTP/1 specification forbids trailers from containing HTTP/2 pseudo-header fields (e.g.
':method'
,':path'
, etc). - encoding: BufferEncoding): this;
The
writable.setDefaultEncoding()
method sets the defaultencoding
for aWritable
stream.@param encodingThe new default encoding
- setEncoding(encoding: BufferEncoding): this;
The
readable.setEncoding()
method sets the character encoding for data read from theReadable
stream.By default, no encoding is assigned and stream data will be returned as
Buffer
objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than asBuffer
objects. For instance, callingreadable.setEncoding('utf8')
will cause the output data to be interpreted as UTF-8 data, and passed as strings. Callingreadable.setEncoding('hex')
will cause the data to be encoded in hexadecimal string format.The
Readable
stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream asBuffer
objects.const readable = getReadableStreamSomehow(); readable.setEncoding('utf8'); readable.on('data', (chunk) => { assert.equal(typeof chunk, 'string'); console.log('Got %d characters of string data:', chunk.length); });
@param encodingThe encoding to use.
- setMaxListeners(n: number): this;
By default
EventEmitter
s will print a warning if more than10
listeners are added for a particular event. This is a useful default that helps finding memory leaks. Theemitter.setMaxListeners()
method allows the limit to be modified for this specificEventEmitter
instance. The value can be set toInfinity
(or0
) to indicate an unlimited number of listeners.Returns a reference to the
EventEmitter
, so that calls can be chained. - msecs: number,callback?: () => void): void;
import http2 from 'node:http2'; const client = http2.connect('http://example.org:8000'); const { NGHTTP2_CANCEL } = http2.constants; const req = client.request({ ':path': '/' }); // Cancel the stream if there's no activity after 5 seconds req.setTimeout(5000, () => req.close(NGHTTP2_CANCEL));
- some(): Promise<boolean>;
This method is similar to
Array.prototype.some
and calls fn on each chunk in the stream until the awaited return value istrue
(or any truthy value). Once an fn call on a chunkawait
ed return value is truthy, the stream is destroyed and the promise is fulfilled withtrue
. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled withfalse
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
true
if fn returned a truthy value for at least one of the chunks. - @param limit
the number of chunks to take from the readable.
@returnsa stream with limit chunks taken.
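A brief sketch, assuming this entry corresponds to readable.take(limit):
import { Readable } from 'node:stream';
const firstTwo = await Readable.from([1, 2, 3, 4]).take(2).toArray();
console.log(firstTwo); // [ 1, 2 ]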
- ): Promise<any[]>;
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.
@returnsa promise containing an array with the contents of the stream.
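A brief sketch, suitable only for small streams since every chunk is buffered in memory:
import { Readable } from 'node:stream';
const chunks = await Readable.from(['a', 'b', 'c']).toArray();
console.log(chunks); // [ 'a', 'b', 'c' ]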
The
writable.uncork()
method flushes all data buffered since cork was called.When using
writable.cork()
andwritable.uncork()
to manage the buffering of writes to a stream, defer calls towritable.uncork()
usingprocess.nextTick()
. Doing so allows batching of allwritable.write()
calls that occur within a given Node.js event loop phase.stream.cork(); stream.write('some '); stream.write('data '); process.nextTick(() => stream.uncork());
If the
writable.cork()
method is called multiple times on a stream, the same number of calls towritable.uncork()
must be called to flush the buffered data.stream.cork(); stream.write('some '); stream.cork(); stream.write('data '); process.nextTick(() => { stream.uncork(); // The data will not be flushed until uncork() is called a second time. stream.uncork(); });
See also:
writable.cork()
- unpipe(destination?: WritableStream): this;
The
readable.unpipe()
method detaches aWritable
stream previously attached using the pipe method.If the
destination
is not specified, then all pipes are detached.If the
destination
is specified, but no pipe is set up for it, then the method does nothing.import fs from 'node:fs'; const readable = getReadableStreamSomehow(); const writable = fs.createWriteStream('file.txt'); // All the data from readable goes into 'file.txt', // but only for the first second. readable.pipe(writable); setTimeout(() => { console.log('Stop writing to file.txt.'); readable.unpipe(writable); console.log('Manually close the file stream.'); writable.end(); }, 1000);
@param destinationOptional specific stream to unpipe
- unshift(chunk: any, encoding?: BufferEncoding): void;
Passing
chunk
asnull
signals the end of the stream (EOF) and behaves the same asreadable.push(null)
, after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.The
readable.unshift()
method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.The
stream.unshift(chunk)
method cannot be called after the'end'
event has been emitted or a runtime error will be thrown.Developers using
stream.unshift()
often should consider switching to use of aTransform
stream instead. See theAPI for stream implementers
section for more information.// Pull off a header delimited by \n\n. // Use unshift() if we get too much. // Call the callback with (error, header, stream). import { StringDecoder } from 'node:string_decoder'; function parseHeader(stream, callback) { stream.on('error', callback); stream.on('readable', onReadable); const decoder = new StringDecoder('utf8'); let header = ''; function onReadable() { let chunk; while (null !== (chunk = stream.read())) { const str = decoder.write(chunk); if (str.includes('\n\n')) { // Found the header boundary. const split = str.split(/\n\n/); header += split.shift(); const remaining = split.join('\n\n'); const buf = Buffer.from(remaining, 'utf8'); stream.removeListener('error', callback); // Remove the 'readable' listener before unshifting. stream.removeListener('readable', onReadable); if (buf.length) stream.unshift(buf); // Now the body of the message can be read from the stream. callback(null, header, stream); return; } // Still reading the header. header += str; } } }
Unlike push,
stream.unshift(chunk)
will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results ifreadable.unshift()
is called during a read (i.e. from within a _read implementation on a custom stream). Following the call toreadable.unshift()
with an immediate push will reset the reading state appropriately, however it is best to simply avoid callingreadable.unshift()
while in the process of performing a read.@param chunkChunk of data to unshift onto the read queue. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray}, {DataView} ornull
. For object mode streams,chunk
may be any JavaScript value.@param encodingEncoding of string chunks. Must be a valid
Buffer
encoding, such as'utf8'
or'ascii'
. - wrap(stream: ReadableStream): this;
Prior to Node.js 0.10, streams did not implement the entire
node:stream
module API as it is currently defined. (SeeCompatibility
for more information.)When using an older Node.js library that emits
'data'
events and has a pause method that is advisory only, thereadable.wrap()
method can be used to create aReadable
stream that uses the old stream as its data source.It will rarely be necessary to use
readable.wrap()
but the method has been provided as a convenience for interacting with older Node.js applications and libraries.import { OldReader } from './old-api-module.js'; import { Readable } from 'node:stream'; const oreader = new OldReader(); const myReader = new Readable().wrap(oreader); myReader.on('readable', () => { myReader.read(); // etc. });
@param streamAn "old style" readable stream
- write(chunk: any,): boolean;
The
writable.write()
method writes some data to the stream, and calls the suppliedcallback
once the data has been fully handled. If an error occurs, thecallback
will be called with the error as its first argument. Thecallback
is called asynchronously and before'error'
is emitted.The return value is
true
if the internal buffer is less than thehighWaterMark
configured when the stream was created after admittingchunk
. Iffalse
is returned, further attempts to write data to the stream should stop until the'drain'
event is emitted.While a stream is not draining, calls to
write()
will bufferchunk
, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'
event will be emitted. Oncewrite()
returns false, do not write more chunks until the'drain'
event is emitted. While callingwrite()
on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform
, because theTransform
streams are paused by default until they are piped or a'data'
or'readable'
event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readable
and use pipe. However, if callingwrite()
is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'
event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });
A
Writable
stream in object mode will always ignore theencoding
argument.@param chunkOptional data to write. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunk
may be any JavaScript value other thannull
.@param callbackCallback for when this chunk of data is flushed.
@returnsfalse
if the stream wishes for the calling code to wait for the'drain'
event to be emitted before continuing to write additional data; otherwisetrue
.write(chunk: any, encoding: BufferEncoding,): boolean;The
writable.write()
method writes some data to the stream, and calls the suppliedcallback
once the data has been fully handled. If an error occurs, thecallback
will be called with the error as its first argument. Thecallback
is called asynchronously and before'error'
is emitted.The return value is
true
if the internal buffer is less than thehighWaterMark
configured when the stream was created after admittingchunk
. Iffalse
is returned, further attempts to write data to the stream should stop until the'drain'
event is emitted.While a stream is not draining, calls to
write()
will bufferchunk
, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'
event will be emitted. Oncewrite()
returns false, do not write more chunks until the'drain'
event is emitted. While callingwrite()
on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform
, because theTransform
streams are paused by default until they are piped or a'data'
or'readable'
event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readable
and use pipe. However, if callingwrite()
is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'
event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });
A
Writable
stream in object mode will always ignore theencoding
argument.@param chunkOptional data to write. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunk
may be any JavaScript value other thannull
.@param encodingThe encoding, if
chunk
is a string.@param callbackCallback for when this chunk of data is flushed.
@returnsfalse
if the stream wishes for the calling code to wait for the'drain'
event to be emitted before continuing to write additional data; otherwisetrue
.
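As a concrete illustration, here is a minimal sketch of draining-aware writes against such a writable, for example an Http2ServerResponse. The sendChunks helper and its arguments are illustrative, not part of the API:

function sendChunks(res, chunks) {
  let i = 0;
  (function writeNext() {
    while (i < chunks.length) {
      // write() returns false once the internal buffer exceeds highWaterMark;
      // stop here and resume only after 'drain'.
      if (!res.write(chunks[i++])) {
        res.once('drain', writeNext);
        return;
      }
    }
    res.end();
  })();
}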
interface ClientSessionOptions
- unknownProtocolTimeout?: number
Specifies a timeout in milliseconds that a server should wait when an 'unknownProtocol' event is emitted. If the socket has not been destroyed by that time the server will destroy it.
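In the Node.js docs this option is described for http2.createSecureServer(); a minimal sketch of passing it there (key.pem and cert.pem are placeholder paths, and the 5000 ms value is arbitrary):

import { createSecureServer } from 'node:http2';
import { readFileSync } from 'node:fs';

const server = createSecureServer({
  key: readFileSync('key.pem'),
  cert: readFileSync('cert.pem'),
  // Destroy sockets that have not negotiated a known protocol within 5 seconds.
  unknownProtocolTimeout: 5000,
});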
interface ClientSessionRequestOptions
interface Http2SecureServer<Http1Request extends typeof IncomingMessage = typeof IncomingMessage, Http1Response extends typeof ServerResponse = typeof ServerResponse, Http2Request extends typeof Http2ServerRequest = typeof Http2ServerRequest, Http2Response extends typeof Http2ServerResponse = typeof Http2ServerResponse>
Accepts encrypted connections using TLS or SSL.
- maxConnections: number
Set this property to reject connections when the server's connection count gets high.
It is not recommended to use this option once a socket has been sent to a child with child_process.fork().
Calls server.close() and returns a promise that fulfills when the server has closed.
- hostname: string,): void;
The server.addContext() method adds a secure context that will be used if the client request's SNI name matches the supplied hostname (or wildcard).
When there are multiple matching contexts, the most recently added one is used.
@param hostname A SNI host name or wildcard (e.g. '*')
@param context An object containing any of the possible properties from the createSecureContext options arguments (e.g. key, cert, ca, etc), or a TLS context object created with createSecureContext itself.
- event: 'checkContinue',listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void): this;
events.EventEmitter
- tlsClientError
- newSession
- OCSPRequest
- resumeSession
- secureConnection
- keylog
event: 'request',listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void): this;
event: 'session',listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>) => void): this;
event: 'sessionError',): this;
event: 'stream',): this;
event: 'timeout',listener: () => void): this;
event: 'unknownProtocol',): this;
event: string | symbol,listener: (...args: any[]) => void): this;
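As a general illustration (a sketch, not tied to any single overload), these listeners are registered in the usual EventEmitter style:

server.on('session', (session) => {
  console.log('new session from', session.socket.remoteAddress);
});

server.on('sessionError', (err) => {
  console.error('session error:', err);
});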
Returns the bound address, the address family name, and port of the server as reported by the operating system if listening on an IP socket (useful to find which port was assigned when getting an OS-assigned address): { port: 12346, family: 'IPv4', address: '127.0.0.1' }.
For a server listening on a pipe or Unix domain socket, the name is returned as a string.

const server = net.createServer((socket) => {
  socket.end('goodbye\n');
}).on('error', (err) => {
  // Handle errors here.
  throw err;
});

// Grab an arbitrary unused port.
server.listen(() => {
  console.log('opened server on', server.address());
});

server.address() returns null before the 'listening' event has been emitted or after calling server.close().
- ): this;
Stops the server from accepting new connections and keeps existing connections. This function is asynchronous, the server is finally closed when all connections are ended and the server emits a 'close' event. The optional callback will be called once the 'close' event occurs. Unlike that event, it will be called with an Error as its only argument if the server was not open when it was closed.
@param callback Called when the server is closed.
- emit(event: 'checkContinue',request: InstanceType<Http2Request>,response: InstanceType<Http2Response>): boolean;
Synchronously calls each of the listeners registered for the event named
eventName
, in the order they were registered, passing the supplied arguments to each.Returns
true
if the event had listeners,false
otherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener
emit(event: 'request',request: InstanceType<Http2Request>,response: InstanceType<Http2Response>): boolean; Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbol
s.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]
- ): this;
Asynchronously get the number of concurrent connections on the server. Works when sockets were sent to forks.
Callback should take two arguments err and count.
Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to EventEmitter.defaultMaxListeners.
Returns the session ticket keys.
See Session Resumption for more information.
@returns A 48-byte buffer containing the session ticket keys.
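For instance, a small sketch of reading the current keys:

const keys = server.getTicketKeys();
console.log(keys.length); // 48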
- port?: number, hostname?: string, backlog?: number, listeningListener?: () => void): this;
Start a server listening for connections. A net.Server can be a TCP or an IPC server depending on what it listens to.
Possible signatures:
server.listen(handle[, backlog][, callback])
server.listen(options[, callback])
server.listen(path[, backlog][, callback]) for IPC servers
server.listen([port[, host[, backlog]]][, callback]) for TCP servers
This function is asynchronous. When the server starts listening, the 'listening' event will be emitted. The last parameter callback will be added as a listener for the 'listening' event.
All listen() methods can take a backlog parameter to specify the maximum length of the queue of pending connections. The actual length will be determined by the OS through sysctl settings such as tcp_max_syn_backlog and somaxconn on Linux. The default value of this parameter is 511 (not 512).
All sockets are set to SO_REUSEADDR (see socket(7) for details).
The server.listen() method can be called again if and only if there was an error during the first server.listen() call or server.close() has been called. Otherwise, an ERR_SERVER_ALREADY_LISTEN error will be thrown.
One of the most common errors raised when listening is EADDRINUSE. This happens when another server is already listening on the requested port/path/handle. One way to handle this would be to retry after a certain amount of time:

server.on('error', (e) => {
  if (e.code === 'EADDRINUSE') {
    console.error('Address in use, retrying...');
    setTimeout(() => {
      server.close();
      server.listen(PORT, HOST);
    }, 1000);
  }
});
port?: number, hostname?: string, listeningListener?: () => void): this;
Start a server listening for connections. See the first server.listen() overload above for the full description of the possible signatures and behavior.
port?: number, backlog?: number, listeningListener?: () => void): this;
Start a server listening for connections. See the first server.listen() overload above for the full description of the possible signatures and behavior.
port?: number, listeningListener?: () => void): this;
Start a server listening for connections. See the first server.listen() overload above for the full description of the possible signatures and behavior.
path: string, backlog?: number, listeningListener?: () => void): this;
Start a server listening for connections. See the first server.listen() overload above for the full description of the possible signatures and behavior.
path: string, listeningListener?: () => void): this;
Start a server listening for connections. See the first server.listen() overload above for the full description of the possible signatures and behavior.
listeningListener?: () => void): this;
Start a server listening for connections. See the first server.listen() overload above for the full description of the possible signatures and behavior.
handle: any, backlog?: number, listeningListener?: () => void): this;
Start a server listening for connections. See the first server.listen() overload above for the full description of the possible signatures and behavior.
handle: any, listeningListener?: () => void): this;
Start a server listening for connections. See the first server.listen() overload above for the full description of the possible signatures and behavior.
- eventName: string | symbol,listener?: Function): number;
Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.
@param eventName The name of the event being listened for
@param listener The event handler function
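For instance (sketch):

console.log(server.listenerCount('stream')); // number of 'stream' listeners currently registered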
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName
.server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ]
- eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for
emitter.removeListener()
. - on(event: 'checkContinue',listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void): this;
Adds the
listener
function to the end of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
on(event: 'request',listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void): this;on(event: 'session',listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>) => void): this; - once(event: 'checkContinue',listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void): this;
Adds a one-time
listener
function for the event namedeventName
. The next timeeventName
is triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
once(event: 'request',listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void): this;once(event: 'session',listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>) => void): this; - event: 'checkContinue',listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void): this;
Adds the
listener
function to the beginning of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
event: 'request',listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void): this;event: 'session',listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>) => void): this;event: 'stream',): this; - event: 'checkContinue',listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void): this;
Adds a one-time
listener
function for the event namedeventName
to the beginning of the listeners array. The next timeeventName
is triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
event: 'request',listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void): this;event: 'session',listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>) => void): this;event: 'stream',): this; - eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName
, including any wrappers (such as those created by.once()
).import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log');
Opposite of unref(), calling ref() on a previously unrefed server will not let the program exit if it's the only server left (the default behavior). If the server is refed, calling ref() again will have no effect.
- eventName?: string | symbol): this;
Removes all listeners, or those of the specified
eventName
.It is bad practice to remove listeners added elsewhere in the code, particularly when the
EventEmitter
instance was created by some other component or module (e.g. sockets or file streams).Returns a reference to the
EventEmitter
, so that calls can be chained. - eventName: string | symbol,listener: (...args: any[]) => void): this;
Removes the specified
listener
from the listener array for the event namedeventName
.const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback);
removeListener()
will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specifiedeventName
, thenremoveListener()
must be called multiple times to remove each instance.Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any
removeListener()
orremoveAllListeners()
calls after emitting and before the last listener finishes execution will not remove them fromemit()
in progress. Subsequent events behave as expected.import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // A
Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the
emitter.listeners()
method will need to be recreated.When a single function has been added as a handler multiple times for a single event (as in the example below),
removeListener()
will remove the most recently added instance. In the example theonce('ping')
listener is removed:import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping');
Returns a reference to the
EventEmitter
, so that calls can be chained. - n: number): this;
By default
EventEmitter
s will print a warning if more than10
listeners are added for a particular event. This is a useful default that helps finding memory leaks. Theemitter.setMaxListeners()
method allows the limit to be modified for this specificEventEmitter
instance. The value can be set toInfinity
(or0
) to indicate an unlimited number of listeners.Returns a reference to the
EventEmitter
, so that calls can be chained. - ): void;
The server.setSecureContext() method replaces the secure context of an existing server. Existing connections to the server are not interrupted.
@param options An object containing any of the possible properties from the createSecureContext options arguments (e.g. key, cert, ca, etc).
- ): void;
Sets the session ticket keys.
Changes to the ticket keys are effective only for future server connections. Existing or currently pending server connections will use the previous keys.
See Session Resumption for more information.
@param keys A 48-byte buffer containing the session ticket keys.
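A hedged sketch of rotating the keys periodically (the 12-hour interval is arbitrary; existing connections keep resuming against the previous keys):

import { randomBytes } from 'node:crypto';

// Replace the 48-byte ticket-key buffer on a timer; unref() keeps the timer
// from holding the process open on its own.
setInterval(() => {
  server.setTicketKeys(randomBytes(48));
}, 12 * 60 * 60 * 1000).unref();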
Calling unref() on a server will allow the program to exit if this is the only active server in the event system. If the server is already unrefed, calling unref() again will have no effect.
- ): void;
Throws ERR_HTTP2_INVALID_SETTING_VALUE for invalid settings values. Throws ERR_INVALID_ARG_TYPE for invalid settings argument.
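Assuming this corresponds to server.updateSettings(), a small sketch with arbitrary but in-range values:

// Invalid values throw ERR_HTTP2_INVALID_SETTING_VALUE; a non-object argument
// throws ERR_INVALID_ARG_TYPE.
server.updateSettings({
  enablePush: false,
  initialWindowSize: 1024 * 1024,
});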
interface Http2Server<Http1Request extends typeof IncomingMessage = typeof IncomingMessage, Http1Response extends typeof ServerResponse = typeof ServerResponse, Http2Request extends typeof Http2ServerRequest = typeof Http2ServerRequest, Http2Response extends typeof Http2ServerResponse = typeof Http2ServerResponse>
This class is used to create a TCP or
IPC
server.- maxConnections: number
Set this property to reject connections when the server's connection count gets high.
It is not recommended to use this option once a socket has been sent to a child with child_process.fork().
Calls server.close() and returns a promise that fulfills when the server has closed.
- event: 'checkContinue',listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void): this;
events.EventEmitter
- close
- connection
- error
- listening
- drop
event: 'request',listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void): this;
event: 'session',listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>) => void): this;
event: 'sessionError',): this;
event: 'stream',): this;
event: 'timeout',listener: () => void): this;
event: string | symbol,listener: (...args: any[]) => void): this;
Returns the bound address, the address family name, and port of the server as reported by the operating system if listening on an IP socket (useful to find which port was assigned when getting an OS-assigned address): { port: 12346, family: 'IPv4', address: '127.0.0.1' }.
For a server listening on a pipe or Unix domain socket, the name is returned as a string.

const server = net.createServer((socket) => {
  socket.end('goodbye\n');
}).on('error', (err) => {
  // Handle errors here.
  throw err;
});

// Grab an arbitrary unused port.
server.listen(() => {
  console.log('opened server on', server.address());
});

server.address() returns null before the 'listening' event has been emitted or after calling server.close().
- ): this;
Stops the server from accepting new connections and keeps existing connections. This function is asynchronous, the server is finally closed when all connections are ended and the server emits a 'close' event. The optional callback will be called once the 'close' event occurs. Unlike that event, it will be called with an Error as its only argument if the server was not open when it was closed.
@param callback Called when the server is closed.
- emit(event: 'checkContinue',request: InstanceType<Http2Request>,response: InstanceType<Http2Response>): boolean;
Synchronously calls each of the listeners registered for the event named
eventName
, in the order they were registered, passing the supplied arguments to each.Returns
true
if the event had listeners,false
otherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener
emit(event: 'request',request: InstanceType<Http2Request>,response: InstanceType<Http2Response>): boolean; Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbol
s.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]
- ): this;
Asynchronously get the number of concurrent connections on the server. Works when sockets were sent to forks.
Callback should take two arguments err and count.
Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to EventEmitter.defaultMaxListeners.
- port?: number, hostname?: string, backlog?: number, listeningListener?: () => void): this;
Start a server listening for connections. A net.Server can be a TCP or an IPC server depending on what it listens to.
Possible signatures:
server.listen(handle[, backlog][, callback])
server.listen(options[, callback])
server.listen(path[, backlog][, callback]) for IPC servers
server.listen([port[, host[, backlog]]][, callback]) for TCP servers
This function is asynchronous. When the server starts listening, the 'listening' event will be emitted. The last parameter callback will be added as a listener for the 'listening' event.
All listen() methods can take a backlog parameter to specify the maximum length of the queue of pending connections. The actual length will be determined by the OS through sysctl settings such as tcp_max_syn_backlog and somaxconn on Linux. The default value of this parameter is 511 (not 512).
All sockets are set to SO_REUSEADDR (see socket(7) for details).
The server.listen() method can be called again if and only if there was an error during the first server.listen() call or server.close() has been called. Otherwise, an ERR_SERVER_ALREADY_LISTEN error will be thrown.
One of the most common errors raised when listening is EADDRINUSE. This happens when another server is already listening on the requested port/path/handle. One way to handle this would be to retry after a certain amount of time:

server.on('error', (e) => {
  if (e.code === 'EADDRINUSE') {
    console.error('Address in use, retrying...');
    setTimeout(() => {
      server.close();
      server.listen(PORT, HOST);
    }, 1000);
  }
});
port?: number, hostname?: string, listeningListener?: () => void): this;
Start a server listening for connections. See the first server.listen() overload above for the full description of the possible signatures and behavior.
port?: number, backlog?: number, listeningListener?: () => void): this;
Start a server listening for connections. See the first server.listen() overload above for the full description of the possible signatures and behavior.
port?: number, listeningListener?: () => void): this;
Start a server listening for connections. See the first server.listen() overload above for the full description of the possible signatures and behavior.
path: string, backlog?: number, listeningListener?: () => void): this;
Start a server listening for connections. See the first server.listen() overload above for the full description of the possible signatures and behavior.
path: string, listeningListener?: () => void): this;
Start a server listening for connections. See the first server.listen() overload above for the full description of the possible signatures and behavior.
listeningListener?: () => void): this;
Start a server listening for connections. See the first server.listen() overload above for the full description of the possible signatures and behavior.
handle: any, backlog?: number, listeningListener?: () => void): this;
Start a server listening for connections. See the first server.listen() overload above for the full description of the possible signatures and behavior.
handle: any, listeningListener?: () => void): this;
Start a server listening for connections. See the first server.listen() overload above for the full description of the possible signatures and behavior.
- eventName: string | symbol,listener?: Function): number;
Returns the number of listeners listening for the event named
eventName
. Iflistener
is provided, it will return how many times the listener is found in the list of the listeners of the event.@param eventNameThe name of the event being listened for
@param listenerThe event handler function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName
.server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ]
- eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for
emitter.removeListener()
. - on(event: 'checkContinue',listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void): this;
Adds the
listener
function to the end of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
on(event: 'request',listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void): this;on(event: 'session',listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>) => void): this; - once(event: 'checkContinue',listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void): this;
Adds a one-time
listener
function for the event namedeventName
. The next timeeventName
is triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
once(event: 'request',listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void): this;once(event: 'session',listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>) => void): this; - event: 'checkContinue',listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void): this;
Adds the
listener
function to the beginning of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
event: 'request',listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void): this;event: 'session',listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>) => void): this;event: 'stream',): this; - event: 'checkContinue',listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void): this;
Adds a one-time
listener
function for the event namedeventName
to the beginning of the listeners array. The next timeeventName
is triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
event: 'request',listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void): this;event: 'session',listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>) => void): this;event: 'stream',): this; - eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName
, including any wrappers (such as those created by.once()
).import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log');
Opposite of
unref()
, callingref()
on a previouslyunref
ed server will not let the program exit if it's the only server left (the default behavior). If the server isref
ed callingref()
again will have no effect.- eventName?: string | symbol): this;
Removes all listeners, or those of the specified
eventName
.It is bad practice to remove listeners added elsewhere in the code, particularly when the
EventEmitter
instance was created by some other component or module (e.g. sockets or file streams).Returns a reference to the
EventEmitter
, so that calls can be chained. - eventName: string | symbol,listener: (...args: any[]) => void): this;
Removes the specified
listener
from the listener array for the event namedeventName
.const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback);
removeListener()
will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specifiedeventName
, thenremoveListener()
must be called multiple times to remove each instance.Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any
removeListener()
orremoveAllListeners()
calls after emitting and before the last listener finishes execution will not remove them fromemit()
in progress. Subsequent events behave as expected.import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // A
Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the
emitter.listeners()
method will need to be recreated.When a single function has been added as a handler multiple times for a single event (as in the example below),
removeListener()
will remove the most recently added instance. In the example theonce('ping')
listener is removed:import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping');
Returns a reference to the
EventEmitter
, so that calls can be chained. - n: number): this;
By default
EventEmitter
s will print a warning if more than10
listeners are added for a particular event. This is a useful default that helps finding memory leaks. Theemitter.setMaxListeners()
method allows the limit to be modified for this specificEventEmitter
instance. The value can be set toInfinity
(or0
) to indicate an unlimited number of listeners.Returns a reference to the
EventEmitter
, so that calls can be chained. Calling
unref()
on a server will allow the program to exit if this is the only active server in the event system. If the server is alreadyunref
ed callingunref()
again will have no effect.- ): void;
Used to update the server with the provided settings. Throws ERR_HTTP2_INVALID_SETTING_VALUE for invalid settings values and ERR_INVALID_ARG_TYPE for an invalid settings argument.
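For example, a minimal sketch of adjusting the server's local settings; the particular setting names and values below are illustrative:

import http2 from 'node:http2';

const server = http2.createServer();
// Update the server's local settings. An invalid value (for example
// enablePush: 'yes') would throw ERR_HTTP2_INVALID_SETTING_VALUE, and a
// non-object argument would throw ERR_INVALID_ARG_TYPE.
server.updateSettings({ enablePush: false, initialWindowSize: 1024 * 1024 });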
interface Http2Session
The
EventEmitter
class is defined and exposed by thenode:events
module:import { EventEmitter } from 'node:events';
All
EventEmitter
s emit the event'newListener'
when new listeners are added and'removeListener'
when existing listeners are removed.
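Http2Session instances are not constructed directly by user code. A minimal sketch of the two usual ways one is obtained, assuming an illustrative local port:

import http2 from 'node:http2';

// Server side: each incoming connection produces a ServerHttp2Session,
// delivered via the server's 'session' event.
const server = http2.createServer();
server.on('session', (session) => {
  console.log('new server session');
});
server.listen(8000);

// Client side: http2.connect() returns a ClientHttp2Session.
const client = http2.connect('http://localhost:8000');
client.on('connect', () => {
  console.log('client session connected');
});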
- readonly alpnProtocol?: string
Value will be
undefined
if theHttp2Session
is not yet connected to a socket,h2c
if theHttp2Session
is not connected to aTLSSocket
, or will return the value of the connectedTLSSocket
's ownalpnProtocol
property. - readonly closed: boolean
Will be
true
if thisHttp2Session
instance has been closed, otherwisefalse
. - readonly connecting: boolean
Will be
true
if thisHttp2Session
instance is still connecting, will be set tofalse
before emittingconnect
event and/or calling thehttp2.connect
callback. - readonly destroyed: boolean
Will be
true
if thisHttp2Session
instance has been destroyed and must no longer be used, otherwisefalse
. - readonly encrypted?: boolean
Value is
undefined
if theHttp2Session
session socket has not yet been connected,true
if theHttp2Session
is connected with aTLSSocket
, andfalse
if theHttp2Session
is connected to any other kind of socket or stream. - readonly localSettings: Settings
A prototype-less object describing the current local settings of this
Http2Session
. The local settings are local to thisHttp2Session
instance. - readonly originSet?: string[]
If the
Http2Session
is connected to aTLSSocket
, theoriginSet
property will return anArray
of origins for which theHttp2Session
may be considered authoritative.The
originSet
property is only available when using a secure TLS connection. - readonly pendingSettingsAck: boolean
Indicates whether the
Http2Session
is currently waiting for acknowledgment of a sentSETTINGS
frame. Will betrue
after calling thehttp2session.settings()
method. Will befalse
once all sentSETTINGS
frames have been acknowledged. - readonly remoteSettings: Settings
A prototype-less object describing the current remote settings of this
Http2Session
. The remote settings are set by the connected HTTP/2 peer. - readonly socket: Socket | TLSSocket
Returns a
Proxy
object that acts as anet.Socket
(ortls.TLSSocket
) but limits available methods to ones safe to use with HTTP/2.destroy
,emit
,end
,pause
,read
,resume
, andwrite
will throw an error with codeERR_HTTP2_NO_SOCKET_MANIPULATION
. SeeHttp2Session and Sockets
for more information.setTimeout
method will be called on thisHttp2Session
.All other interactions will be routed directly to the socket.
- readonly state: SessionState
Provides miscellaneous information about the current state of the
Http2Session
.An object describing the current status of this
Http2Session
. - readonly type: number
The
http2session.type
will be equal tohttp2.constants.NGHTTP2_SESSION_SERVER
if thisHttp2Session
instance is a server, andhttp2.constants.NGHTTP2_SESSION_CLIENT
if the instance is a client. - event: 'error',): this;
Alias for
emitter.on(eventName, listener)
.event: 'frameError',listener: (frameType: number, errorCode: number, streamID: number) => void): this;Alias for
emitter.on(eventName, listener)
.event: 'goaway',): this;Alias for
emitter.on(eventName, listener)
.event: 'localSettings',): this;Alias for
emitter.on(eventName, listener)
.event: 'remoteSettings',): this;Alias for
emitter.on(eventName, listener)
.event: string | symbol,listener: (...args: any[]) => void): this;Alias for
emitter.on(eventName, listener)
. - callback?: () => void): void;
Gracefully closes the
Http2Session
, allowing any existing streams to complete on their own and preventing newHttp2Stream
instances from being created. Once closed,http2session.destroy()
might be called if there are no openHttp2Stream
instances.If specified, the
callback
function is registered as a handler for the'close'
event. - code?: number): void;
Immediately terminates the
Http2Session
and the associatednet.Socket
ortls.TLSSocket
.Once destroyed, the
Http2Session
will emit the'close'
event. Iferror
is not undefined, an'error'
event will be emitted immediately before the'close'
event.If there are any remaining open
Http2Streams
associated with theHttp2Session
, those will also be destroyed.@param errorAn
Error
object if theHttp2Session
is being destroyed due to an error.@param codeThe HTTP/2 error code to send in the final
GOAWAY
frame. If unspecified, anderror
is not undefined, the default isINTERNAL_ERROR
, otherwise defaults toNO_ERROR
. - emit(event: 'close'): boolean;
Synchronously calls each of the listeners registered for the event named
eventName
, in the order they were registered, passing the supplied arguments to each.Returns
true
if the event had listeners,false
otherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener
emit(event: 'error',): boolean;Synchronously calls each of the listeners registered for the event named
eventName
, in the order they were registered, passing the supplied arguments to each.Returns
true
if the event had listeners,false
otherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener
emit(event: 'frameError',frameType: number,errorCode: number,streamID: number): boolean;Synchronously calls each of the listeners registered for the event named
eventName
, in the order they were registered, passing the supplied arguments to each.Returns
true
if the event had listeners,false
otherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener
emit(event: 'goaway',errorCode: number,lastStreamID: number,): boolean;Synchronously calls each of the listeners registered for the event named
eventName
, in the order they were registered, passing the supplied arguments to each.Returns
true
if the event had listeners,false
otherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener
emit(event: 'localSettings',): boolean;Synchronously calls each of the listeners registered for the event named
eventName
, in the order they were registered, passing the supplied arguments to each.Returns
true
if the event had listeners,false
otherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener
emit(event: 'ping'): boolean;Synchronously calls each of the listeners registered for the event named
eventName
, in the order they were registered, passing the supplied arguments to each.Returns
true
if the event had listeners,false
otherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener
emit(event: 'remoteSettings',): boolean;Synchronously calls each of the listeners registered for the event named
eventName
, in the order they were registered, passing the supplied arguments to each.Returns
true
if the event had listeners,false
otherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener
emit(event: 'timeout'): boolean;Synchronously calls each of the listeners registered for the event named
eventName
, in the order they were registered, passing the supplied arguments to each.Returns
true
if the event had listeners,false
otherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener
emit(event: string | symbol,...args: any[]): boolean;Synchronously calls each of the listeners registered for the event named
eventName
, in the order they were registered, passing the supplied arguments to each.Returns
true
if the event had listeners,false
otherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener
Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbol
s.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]
Returns the current max listener value for the
EventEmitter
which is either set byemitter.setMaxListeners(n)
or defaults to EventEmitter.defaultMaxListeners.- code?: number,lastStreamID?: number,opaqueData?: ArrayBufferView<ArrayBufferLike>): void;
Transmits a
GOAWAY
frame to the connected peer without shutting down theHttp2Session
.@param codeAn HTTP/2 error code
@param lastStreamIDThe numeric ID of the last processed
Http2Stream
@param opaqueDataA
TypedArray
orDataView
instance containing additional data to be carried within theGOAWAY
frame. - eventName: string | symbol,listener?: Function): number;
Returns the number of listeners listening for the event named
eventName
. Iflistener
is provided, it will return how many times the listener is found in the list of the listeners of the event.@param eventNameThe name of the event being listened for
@param listenerThe event handler function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName
.server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ]
- eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for
emitter.removeListener()
. - on(event: 'close',listener: () => void): this;
Adds the
listener
function to the end of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
on(event: 'error',): this;Adds the
listener
function to the end of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
on(event: 'frameError',listener: (frameType: number, errorCode: number, streamID: number) => void): this;Adds the
listener
function to the end of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
on(event: 'goaway',): this;Adds the
listener
function to the end of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
on(event: 'localSettings',): this;Adds the
listener
function to the end of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
on(event: 'ping',listener: () => void): this;Adds the
listener
function to the end of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
on(event: 'remoteSettings',): this;Adds the
listener
function to the end of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
on(event: 'timeout',listener: () => void): this;Adds the
listener
function to the end of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
on(event: string | symbol,listener: (...args: any[]) => void): this;Adds the
listener
function to the end of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
- once(event: 'close',listener: () => void): this;
Adds a one-time
listener
function for the event namedeventName
. The next timeeventName
is triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
once(event: 'error',): this;Adds a one-time
listener
function for the event namedeventName
. The next timeeventName
is triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
once(event: 'frameError',listener: (frameType: number, errorCode: number, streamID: number) => void): this;Adds a one-time
listener
function for the event namedeventName
. The next timeeventName
is triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
once(event: 'goaway',): this;Adds a one-time
listener
function for the event namedeventName
. The next timeeventName
is triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
once(event: 'localSettings',): this;Adds a one-time
listener
function for the event namedeventName
. The next timeeventName
is triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
once(event: 'ping',listener: () => void): this;Adds a one-time
listener
function for the event namedeventName
. The next timeeventName
is triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
once(event: 'remoteSettings',): this;Adds a one-time
listener
function for the event namedeventName
. The next timeeventName
is triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
once(event: 'timeout',listener: () => void): this;Adds a one-time
listener
function for the event namedeventName
. The next timeeventName
is triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
once(event: string | symbol,listener: (...args: any[]) => void): this;Adds a one-time
listener
function for the event namedeventName
. The next timeeventName
is triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
- ping(): boolean;
Sends a
PING
frame to the connected HTTP/2 peer. Acallback
function must be provided. The method will returntrue
if thePING
was sent,false
otherwise.The maximum number of outstanding (unacknowledged) pings is determined by the
maxOutstandingPings
configuration option. The default maximum is 10.If provided, the
payload
must be aBuffer
,TypedArray
, orDataView
containing 8 bytes of data that will be transmitted with thePING
and returned with the ping acknowledgment.The callback will be invoked with three arguments: an error argument that will be
null
if thePING
was successfully acknowledged, aduration
argument that reports the number of milliseconds elapsed since the ping was sent and the acknowledgment was received, and aBuffer
containing the 8-bytePING
payload.session.ping(Buffer.from('abcdefgh'), (err, duration, payload) => { if (!err) { console.log(`Ping acknowledged in ${duration} milliseconds`); console.log(`With payload '${payload.toString()}'`); } });
If the
payload
argument is not specified, the default payload will be the 64-bit timestamp (little endian) marking the start of thePING
duration. - event: 'close',listener: () => void): this;
Adds the
listener
function to the beginning of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
event: 'error',): this;Adds the
listener
function to the beginning of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
event: 'frameError',listener: (frameType: number, errorCode: number, streamID: number) => void): this;Adds the
listener
function to the beginning of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
event: 'goaway',): this;Adds the
listener
function to the beginning of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
event: 'localSettings',): this;Adds the
listener
function to the beginning of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
event: 'ping',listener: () => void): this;Adds the
listener
function to the beginning of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
event: 'remoteSettings',): this;Adds the
listener
function to the beginning of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
event: 'timeout',listener: () => void): this;Adds the
listener
function to the beginning of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
event: string | symbol,listener: (...args: any[]) => void): this;Adds the
listener
function to the beginning of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
- event: 'close',listener: () => void): this;
Adds a one-time
listener
function for the event namedeventName
to the beginning of the listeners array. The next timeeventName
is triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
event: 'error',): this;Adds a one-time
listener
function for the event namedeventName
to the beginning of the listeners array. The next timeeventName
is triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
event: 'frameError',listener: (frameType: number, errorCode: number, streamID: number) => void): this;Adds a one-time
listener
function for the event namedeventName
to the beginning of the listeners array. The next timeeventName
is triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
event: 'goaway',): this;Adds a one-time
listener
function for the event namedeventName
to the beginning of the listeners array. The next timeeventName
is triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
event: 'localSettings',): this;Adds a one-time
listener
function for the event namedeventName
to the beginning of the listeners array. The next timeeventName
is triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
event: 'ping',listener: () => void): this;Adds a one-time
listener
function for the event namedeventName
to the beginning of the listeners array. The next timeeventName
is triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
event: 'remoteSettings',): this;Adds a one-time
listener
function for the event namedeventName
to the beginning of the listeners array. The next timeeventName
is triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
event: 'timeout',listener: () => void): this;Adds a one-time
listener
function for the event namedeventName
to the beginning of the listeners array. The next timeeventName
is triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
event: string | symbol,listener: (...args: any[]) => void): this;Adds a one-time
listener
function for the event namedeventName
to the beginning of the listeners array. The next timeeventName
is triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName
, including any wrappers (such as those created by.once()
).import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log');
Calls
ref()
on thisHttp2Session
instance's underlyingnet.Socket
.- eventName?: string | symbol): this;
Removes all listeners, or those of the specified
eventName
.It is bad practice to remove listeners added elsewhere in the code, particularly when the
EventEmitter
instance was created by some other component or module (e.g. sockets or file streams).Returns a reference to the
EventEmitter
, so that calls can be chained. - eventName: string | symbol,listener: (...args: any[]) => void): this;
Removes the specified
listener
from the listener array for the event namedeventName
.const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback);
removeListener()
will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specifiedeventName
, thenremoveListener()
must be called multiple times to remove each instance.Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any
removeListener()
orremoveAllListeners()
calls after emitting and before the last listener finishes execution will not remove them fromemit()
in progress. Subsequent events behave as expected.import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // A
Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the
emitter.listeners()
method will need to be recreated.When a single function has been added as a handler multiple times for a single event (as in the example below),
removeListener()
will remove the most recently added instance. In the example theonce('ping')
listener is removed:import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping');
Returns a reference to the
EventEmitter
, so that calls can be chained. - windowSize: number): void;
Sets the local endpoint's window size. The
windowSize
is the total window size to set, not the delta.import http2 from 'node:http2'; const server = http2.createServer(); const expectedWindowSize = 2 ** 20; server.on('connect', (session) => { // Set local window size to be 2 ** 20 session.setLocalWindowSize(expectedWindowSize); });
- n: number): this;
By default
EventEmitter
s will print a warning if more than10
listeners are added for a particular event. This is a useful default that helps finding memory leaks. Theemitter.setMaxListeners()
method allows the limit to be modified for this specificEventEmitter
instance. The value can be set toInfinity
(or0
) to indicate an unlimited number of listeners.Returns a reference to the
EventEmitter
, so that calls can be chained. - msecs: number,callback?: () => void): void;
Used to set a callback function that is called when there is no activity on the
Http2Session
aftermsecs
milliseconds. The givencallback
is registered as a listener on the'timeout'
event. - ): void;
Updates the current local settings for this
Http2Session
and sends a newSETTINGS
frame to the connected HTTP/2 peer.Once called, the
http2session.pendingSettingsAck
property will betrue
while the session is waiting for the remote peer to acknowledge the new settings.The new settings will not become effective until the
SETTINGS
acknowledgment is received and the'localSettings'
event is emitted. It is possible to send multipleSETTINGS
frames while acknowledgment is still pending.@param callbackCallback that is called once the session is connected or right away if the session is already connected.
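A minimal sketch of submitting new settings and observing when they take effect; the setting chosen here is illustrative:

import http2 from 'node:http2';

const server = http2.createServer();
server.on('session', (session) => {
  // Send an updated SETTINGS frame; pendingSettingsAck stays true until
  // the peer acknowledges it.
  session.settings({ enablePush: false });
  session.once('localSettings', (settings) => {
    // Emitted once the acknowledgment arrives and the settings are in effect.
    console.log('local settings now active:', settings);
  });
});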
Calls
unref()
on thisHttp2Session
instance's underlyingnet.Socket
.
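Putting several of the methods above together, a sketch of idle-timeout handling and graceful shutdown for server-side sessions; the timeout value and signal handling are illustrative:

import http2 from 'node:http2';

const server = http2.createServer();
const sessions = new Set();

server.on('session', (session) => {
  sessions.add(session);
  session.on('close', () => sessions.delete(session));
  // Destroy sessions that have been idle for two minutes.
  session.setTimeout(120_000, () => session.destroy());
});

// Graceful shutdown: existing streams may finish, no new streams are accepted.
process.on('SIGTERM', () => {
  for (const session of sessions) session.close();
  server.close();
});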
interface Http2Stream
Duplex streams are streams that implement both the
Readable
andWritable
interfaces.Examples of
Duplex
streams include:TCP sockets
zlib streams
crypto streams
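A minimal sketch of the most common way an Http2Stream is obtained and used on the server side; respond() belongs to the ServerHttp2Stream subclass, and the response body and port are illustrative:

import http2 from 'node:http2';

const server = http2.createServer();
server.on('stream', (stream, headers) => {
  // `stream` is a ServerHttp2Stream: readable for the request body,
  // writable for the response body.
  stream.respond({ ':status': 200, 'content-type': 'text/plain' });
  stream.end('hello over HTTP/2');
});
server.listen(8000);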
- readonly aborted: boolean
Set to
true
if theHttp2Stream
instance was aborted abnormally. When set, the'aborted'
event will have been emitted. - allowHalfOpen: boolean
If
false
then the stream will automatically end the writable side when the readable side ends. Set initially by theallowHalfOpen
constructor option, which defaults totrue
.This can be changed manually to change the half-open behavior of an existing
Duplex
stream instance, but must be changed before the'end'
event is emitted. - readonly bufferSize: number
This property shows the number of characters currently buffered to be written. See
net.Socket.bufferSize
for details. - readonly destroyed: boolean
Set to
true
if theHttp2Stream
instance has been destroyed and is no longer usable. - readonly endAfterHeaders: boolean
Set to
true
if theEND_STREAM
flag was set in the request or response HEADERS frame received, indicating that no additional data should be received and the readable side of theHttp2Stream
will be closed. - readonly id?: number
The numeric stream identifier of this
Http2Stream
instance. Set toundefined
if the stream identifier has not yet been assigned. - readonly pending: boolean
Set to
true
if theHttp2Stream
instance has not yet been assigned a numeric stream identifier. - readable: boolean
Is
true
if it is safe to call read, which means the stream has not been destroyed or emitted'error'
or'end'
. - readonly readableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'end'
. - readonly readableEncoding: null | BufferEncoding
Getter for the property
encoding
of a givenReadable
stream. Theencoding
property can be set using the setEncoding method. - readonly readableFlowing: null | boolean
This property reflects the current state of a
Readable
stream as described in the Three states section. - readonly readableHighWaterMark: number
Returns the value of
highWaterMark
passed when creating thisReadable
. - readonly readableLength: number
This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the
highWaterMark
. - readonly rstCode: number
Set to the
RST_STREAM
error code
reported when theHttp2Stream
is destroyed after either receiving anRST_STREAM
frame from the connected peer, callinghttp2stream.close()
, orhttp2stream.destroy()
. Will beundefined
if theHttp2Stream
has not been closed. - readonly sentHeaders: OutgoingHttpHeaders
An object containing the outbound headers sent for this
Http2Stream
. - readonly sentInfoHeaders?: OutgoingHttpHeaders[]
An array of objects containing the outbound informational (additional) headers sent for this
Http2Stream
. - readonly sentTrailers?: OutgoingHttpHeaders
An object containing the outbound trailers sent for this
Http2Stream
. - readonly session: undefined | Http2Session
A reference to the
Http2Session
instance that owns thisHttp2Stream
. The value will beundefined
after theHttp2Stream
instance is destroyed. - readonly state: StreamState
Provides miscellaneous information about the current state of the
Http2Stream
.A current state of this
Http2Stream
. - readonly writable: boolean
Is
true
if it is safe to callwritable.write()
, which means the stream has not been destroyed, errored, or ended. - readonly writableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'finish'
. - readonly writableCorked: number
Number of times
writable.uncork()
needs to be called in order to fully uncork the stream. - readonly writableEnded: boolean
Is
true
afterwritable.end()
has been called. This property does not indicate whether the data has been flushed, for this usewritable.writableFinished
instead. - readonly writableHighWaterMark: number
Return the value of
highWaterMark
passed when creating thisWritable
. - readonly writableLength: number
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the
highWaterMark
. - readonly writableNeedDrain: boolean
Is
true
if the stream's buffer has been full and stream will emit'drain'
. Calls
readable.destroy()
with anAbortError
and returns a promise that fulfills when the stream is finished.- event: 'aborted',listener: () => void): this;
Event emitter. The defined events include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'close',listener: () => void): this;
event: 'data',): this;
event: 'drain',listener: () => void): this;
event: 'end',listener: () => void): this;
event: 'error',): this;
event: 'finish',listener: () => void): this;
event: 'frameError',listener: (frameType: number, errorCode: number) => void): this;
event: 'pipe',): this;
event: 'unpipe',): this;
event: 'streamClosed',listener: (code: number) => void): this;
event: 'timeout',listener: () => void): this;
event: 'trailers',): this;
event: 'wantTrailers',listener: () => void): this;
event: string | symbol,listener: (...args: any[]) => void): this;
This method returns a new stream with chunks of the underlying stream paired with a counter in the form
[index, chunk]
. The first index value is0
and it increases by 1 for each chunk produced.@returnsa stream of indexed pairs.
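For illustration, a minimal sketch (note that asIndexedPairs() is experimental and has been deprecated in recent Node.js releases, so it may not be available in every runtime):

import { Readable } from 'node:stream';

// Pair each chunk with its index, then collect the results.
const pairs = await Readable.from(['a', 'b', 'c']).asIndexedPairs().toArray();
console.log(pairs); // [ [ 0, 'a' ], [ 1, 'b' ], [ 2, 'c' ] ]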
- code?: number,callback?: () => void): void;
Closes the
Http2Stream
instance by sending anRST_STREAM
frame to the connected HTTP/2 peer.@param codeUnsigned 32-bit integer identifying the error code.
@param callbackAn optional function registered to listen for the
'close'
event. - stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
The
writable.cork()
method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.The primary intent of
writable.cork()
is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination,writable.cork()
buffers all the chunks untilwritable.uncork()
is called, which will pass them all towritable._writev()
, if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use ofwritable.cork()
without implementingwritable._writev()
may have an adverse effect on throughput.See also:
writable.uncork()
,writable._writev()
.- ): this;
Destroy the stream. Optionally emit an
'error'
event, and emit a'close'
event (unlessemitClose
is set tofalse
). After this call, the readable stream will release any internal resources and subsequent calls topush()
will be ignored.Once
destroy()
has been called any further calls will be a no-op and no further errors except from_destroy()
may be emitted as'error'
.Implementors should not override this method, but instead implement
readable._destroy()
.@param errorError which will be passed as payload in
'error'
event - drop(limit: number,
This method returns a new stream with the first limit chunks dropped from the start.
@param limitthe number of chunks to drop from the readable.
@returnsa stream with limit chunks dropped from the start.
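A minimal sketch of drop() on an in-memory stream, assuming a Node.js version where the experimental stream helper methods are available:

import { Readable } from 'node:stream';

// Skip the first two chunks and collect the rest.
const rest = await Readable.from([1, 2, 3, 4]).drop(2).toArray();
console.log(rest); // [ 3, 4 ]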
- emit(event: 'aborted'): boolean;
Synchronously calls each of the listeners registered for the event named
eventName
, in the order they were registered, passing the supplied arguments to each.Returns
true
if the event had listeners,false
otherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener
- end(cb?: () => void): this;
Calling the
writable.end()
method signals that no more data will be written to theWritable
. The optionalchunk
andencoding
arguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!
end(chunk: any,cb?: () => void): this;Calling the
writable.end()
method signals that no more data will be written to theWritable
. The optionalchunk
andencoding
arguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!
@param chunkOptional data to write. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunk
may be any JavaScript value other thannull
.end(chunk: any,encoding: BufferEncoding,cb?: () => void): this;Calling the
writable.end()
method signals that no more data will be written to theWritable
. The optionalchunk
andencoding
arguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!
@param chunkOptional data to write. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunk
may be any JavaScript value other thannull
.@param encodingThe encoding if
chunk
is a string Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbol
s.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]
- ): Promise<boolean>;
This method is similar to
Array.prototype.every
and calls fn on each chunk in the stream to check if all awaited return values are truthy value for fn. Once an fn call on a chunkawait
ed return value is falsy, the stream is destroyed and the promise is fulfilled withfalse
. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled withtrue
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
true
if fn returned a truthy value for every one of the chunks. This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be
await
ed.@param fna function to filter chunks from the stream. Async or not.
@returnsa stream filtered with the predicate fn.
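As a short sketch, filtering an in-memory stream (the predicate could equally be async):

import { Readable } from 'node:stream';

// Keep only the even numbers and collect them into an array.
const evens = await Readable.from([1, 2, 3, 4]).filter((n) => n % 2 === 0).toArray();
console.log(evens); // [ 2, 4 ]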
- ): Promise<undefined | T>;
This method is similar to
Array.prototype.find
and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled withundefined
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefined
if no element was found.find(): Promise<any>;This method is similar to
Array.prototype.find
and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled withundefined
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefined
if no element was found. This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.
It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.
@param fna function to map over every chunk in the stream. May be async. May be a stream or generator.
@returnsa stream flat-mapped with the function fn.
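A small sketch of flatMap(), where each chunk expands into several chunks on the result stream:

import { Readable } from 'node:stream';

// Each line is split into words; the resulting arrays are flattened into one stream.
const words = await Readable.from(['hello world', 'foo bar']).flatMap((line) => line.split(' ')).toArray();
console.log(words); // [ 'hello', 'world', 'foo', 'bar' ]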
- ): Promise<void>;
This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be
await
ed.This method is different from
for await...of
loops in that it can optionally process chunks concurrently. In addition, aforEach
iteration can only be stopped by having passed asignal
option and aborting the related AbortController whilefor await...of
can be stopped withbreak
orreturn
. In either case the stream will be destroyed.This method is different from listening to the
'data'
event in that it uses thereadable
event in the underlying machinery and can limit the number of concurrent fn calls.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise for when the stream has finished.
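A hedged sketch of forEach() with the concurrency option (the 100 ms delay is only there to make the concurrency visible):

import { Readable } from 'node:stream';
import { setTimeout as sleep } from 'node:timers/promises';

// Up to two fn calls may be in flight at once.
await Readable.from([1, 2, 3, 4]).forEach(async (n) => {
  await sleep(100);
  console.log(n);
}, { concurrency: 2 });
console.log('stream finished');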
Returns the current max listener value for the
EventEmitter
which is either set byemitter.setMaxListeners(n)
or defaults to EventEmitter.defaultMaxListeners.The
readable.isPaused()
method returns the current operating state of theReadable
. This is used primarily by the mechanism that underlies thereadable.pipe()
method. In most typical cases, there will be no reason to use this method directly.const readable = new stream.Readable(); readable.isPaused(); // === false readable.pause(); readable.isPaused(); // === true readable.resume(); readable.isPaused(); // === false
- options?: { destroyOnReturn: boolean }): AsyncIterator<any>;
The iterator created by this method gives users the option to cancel the destruction of the stream if the
for await...of
loop is exited byreturn
,break
, orthrow
, or if the iterator should destroy the stream if the stream emitted an error during iteration. - eventName: string | symbol,listener?: Function): number;
Returns the number of listeners listening for the event named
eventName
. Iflistener
is provided, it will return how many times the listener is found in the list of the listeners of the event.@param eventNameThe name of the event being listened for
@param listenerThe event handler function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName
.server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ]
- map(
This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be
await
ed before being passed to the result stream.@param fna function to map over every chunk in the stream. Async or not.
@returnsa stream mapped with the function fn.
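For example, a minimal sketch mapping each chunk with an async function:

import { Readable } from 'node:stream';

// fn may be async; results are emitted on the mapped stream.
const doubled = await Readable.from([1, 2, 3]).map(async (n) => n * 2).toArray();
console.log(doubled); // [ 2, 4, 6 ]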
- eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for
emitter.removeListener()
. - on(event: 'aborted',listener: () => void): this;
Adds the
listener
function to the end of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
- once(event: 'aborted',listener: () => void): this;
Adds a one-time
listener
function for the event namedeventName
. The next timeeventName
is triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
The
readable.pause()
method will cause a stream in flowing mode to stop emitting'data'
events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.const readable = getReadableStreamSomehow(); readable.on('data', (chunk) => { console.log(`Received ${chunk.length} bytes of data.`); readable.pause(); console.log('There will be no additional data for 1 second.'); setTimeout(() => { console.log('Now data will start flowing again.'); readable.resume(); }, 1000); });
The
readable.pause()
method has no effect if there is a'readable'
event listener.- event: 'aborted',listener: () => void): this;
Adds the
listener
function to the beginning of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
event: 'frameError',listener: (frameType: number, errorCode: number) => void): this;event: 'trailers',): this; - event: 'aborted',listener: () => void): this;
Adds a one-time
listener
function for the event namedeventName
to the beginning of the listeners array. The next timeeventName
is triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
event: 'data',): this;event: 'frameError',listener: (frameType: number, errorCode: number) => void): this;event: 'trailers',): this; - eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName
, including any wrappers (such as those created by.once()
).import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log');
- read(size?: number): any;
The
readable.read()
method reads data out of the internal buffer and returns it. If no data is available to be read,null
is returned. By default, the data is returned as aBuffer
object unless an encoding has been specified using thereadable.setEncoding()
method or the stream is operating in object mode.The optional
size
argument specifies a specific number of bytes to read. Ifsize
bytes are not available to be read,null
will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.If the
size
argument is not specified, all of the data contained in the internal buffer will be returned.The
size
argument must be less than or equal to 1 GiB.The
readable.read()
method should only be called onReadable
streams operating in paused mode. In flowing mode,readable.read()
is called automatically until the internal buffer is fully drained.const readable = getReadableStreamSomehow(); // 'readable' may be triggered multiple times as data is buffered in readable.on('readable', () => { let chunk; console.log('Stream is readable (new data received in buffer)'); // Use a loop to make sure we read all currently available data while (null !== (chunk = readable.read())) { console.log(`Read ${chunk.length} bytes of data...`); } }); // 'end' will be triggered once when there is no more data available readable.on('end', () => { console.log('Reached end of stream.'); });
Each call to
readable.read()
returns a chunk of data, ornull
. The chunks are not concatenated. Awhile
loop is necessary to consume all data currently in the buffer. When reading a large file, .read()
may returnnull
, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new'readable'
event will be emitted when there is more data in the buffer. Finally the'end'
event will be emitted when there is no more data to come.Therefore to read a file's whole contents from a
readable
, it is necessary to collect chunks across multiple'readable'
events:const chunks = []; readable.on('readable', () => { let chunk; while (null !== (chunk = readable.read())) { chunks.push(chunk); } }); readable.on('end', () => { const content = chunks.join(''); });
A
Readable
stream in object mode will always return a single item from a call toreadable.read(size)
, regardless of the value of thesize
argument.If the
readable.read()
method returns a chunk of data, a'data'
event will also be emitted.Calling read after the
'end'
event has been emitted will returnnull
. No runtime error will be raised.@param sizeOptional argument to specify how much data to read.
- initial?: undefined,): Promise<T>;
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeError
with theERR_INVALID_ARGS
code property.The reducer function iterates the stream element-by-element which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to
readable.map
method.@param fna reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
initial: T,): Promise<T>;This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeError
with theERR_INVALID_ARGS
code property.The reducer function iterates the stream element-by-element which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to
readable.map
method.@param fna reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
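A minimal sketch of reduce() with an explicit initial value:

import { Readable } from 'node:stream';

// Sum the chunks; 0 is the initial value for the reduction.
const total = await Readable.from([1, 2, 3, 4]).reduce((sum, n) => sum + n, 0);
console.log(total); // 10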
- eventName?: string | symbol): this;
Removes all listeners, or those of the specified
eventName
.It is bad practice to remove listeners added elsewhere in the code, particularly when the
EventEmitter
instance was created by some other component or module (e.g. sockets or file streams).Returns a reference to the
EventEmitter
, so that calls can be chained. - event: 'close',listener: () => void): this;
Removes the specified
listener
from the listener array for the event namedeventName
.const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback);
removeListener()
will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specifiedeventName
, thenremoveListener()
must be called multiple times to remove each instance.Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any
removeListener()
orremoveAllListeners()
calls after emitting and before the last listener finishes execution will not remove them fromemit()
in progress. Subsequent events behave as expected.import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // A
Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the
emitter.listeners()
method will need to be recreated.When a single function has been added as a handler multiple times for a single event (as in the example below),
removeListener()
will remove the most recently added instance. In the example theonce('ping')
listener is removed:import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping');
Returns a reference to the
EventEmitter
, so that calls can be chained. The
readable.resume()
method causes an explicitly pausedReadable
stream to resume emitting'data'
events, switching the stream into flowing mode.The
readable.resume()
method can be used to fully consume the data from a stream without actually processing any of that data:getReadableStreamSomehow() .resume() .on('end', () => { console.log('Reached the end, but did not read anything.'); });
The
readable.resume()
method has no effect if there is a'readable'
event listener.- ): void;
Sends a trailing
HEADERS
frame to the connected HTTP/2 peer. This method will cause theHttp2Stream
to be immediately closed and must only be called after the'wantTrailers'
event has been emitted. When sending a request or sending a response, theoptions.waitForTrailers
option must be set in order to keep theHttp2Stream
open after the finalDATA
frame so that trailers can be sent.import http2 from 'node:http2'; const server = http2.createServer(); server.on('stream', (stream) => { stream.respond(undefined, { waitForTrailers: true }); stream.on('wantTrailers', () => { stream.sendTrailers({ xyz: 'abc' }); }); stream.end('Hello World'); });
The HTTP/1 specification forbids trailers from containing HTTP/2 pseudo-header fields (e.g.
':method'
,':path'
, etc). - encoding: BufferEncoding): this;
The
writable.setDefaultEncoding()
method sets the defaultencoding
for aWritable
stream.@param encodingThe new default encoding
- encoding: BufferEncoding): this;
The
readable.setEncoding()
method sets the character encoding for data read from theReadable
stream.By default, no encoding is assigned and stream data will be returned as
Buffer
objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than asBuffer
objects. For instance, callingreadable.setEncoding('utf8')
will cause the output data to be interpreted as UTF-8 data, and passed as strings. Callingreadable.setEncoding('hex')
will cause the data to be encoded in hexadecimal string format.The
Readable
stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream asBuffer
objects.const readable = getReadableStreamSomehow(); readable.setEncoding('utf8'); readable.on('data', (chunk) => { assert.equal(typeof chunk, 'string'); console.log('Got %d characters of string data:', chunk.length); });
@param encodingThe encoding to use.
- n: number): this;
By default
EventEmitter
s will print a warning if more than10
listeners are added for a particular event. This is a useful default that helps finding memory leaks. Theemitter.setMaxListeners()
method allows the limit to be modified for this specificEventEmitter
instance. The value can be set toInfinity
(or0
) to indicate an unlimited number of listeners.Returns a reference to the
EventEmitter
, so that calls can be chained. - msecs: number,callback?: () => void): void;
import http2 from 'node:http2'; const client = http2.connect('http://example.org:8000'); const { NGHTTP2_CANCEL } = http2.constants; const req = client.request({ ':path': '/' }); // Cancel the stream if there's no activity after 5 seconds req.setTimeout(5000, () => req.close(NGHTTP2_CANCEL));
- some(): Promise<boolean>;
This method is similar to
Array.prototype.some
and calls fn on each chunk in the stream until the awaited return value istrue
(or any truthy value). Once an fn call on a chunkawait
ed return value is truthy, the stream is destroyed and the promise is fulfilled withtrue
. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled withfalse
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
true
if fn returned a truthy value for at least one of the chunks. - take(limit: number,
This method returns a new stream with the first limit chunks.
@param limit
the number of chunks to take from the readable.
@returnsa stream with limit chunks taken.
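For example, a brief sketch combining take() with toArray():

import { Readable } from 'node:stream';

// Keep only the first two chunks, then collect them.
const firstTwo = await Readable.from([1, 2, 3, 4]).take(2).toArray();
console.log(firstTwo); // [ 1, 2 ]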
- ): Promise<any[]>;
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.
@returnsa promise containing an array with the contents of the stream.
The
writable.uncork()
method flushes all data buffered since cork was called.When using
writable.cork()
andwritable.uncork()
to manage the buffering of writes to a stream, defer calls towritable.uncork()
usingprocess.nextTick()
. Doing so allows batching of allwritable.write()
calls that occur within a given Node.js event loop phase.stream.cork(); stream.write('some '); stream.write('data '); process.nextTick(() => stream.uncork());
If the
writable.cork()
method is called multiple times on a stream, the same number of calls towritable.uncork()
must be called to flush the buffered data.stream.cork(); stream.write('some '); stream.cork(); stream.write('data '); process.nextTick(() => { stream.uncork(); // The data will not be flushed until uncork() is called a second time. stream.uncork(); });
See also:
writable.cork()
.- destination?: WritableStream): this;
The
readable.unpipe()
method detaches aWritable
stream previously attached using the pipe method.If the
destination
is not specified, then all pipes are detached.If the
destination
is specified, but no pipe is set up for it, then the method does nothing.import fs from 'node:fs'; const readable = getReadableStreamSomehow(); const writable = fs.createWriteStream('file.txt'); // All the data from readable goes into 'file.txt', // but only for the first second. readable.pipe(writable); setTimeout(() => { console.log('Stop writing to file.txt.'); readable.unpipe(writable); console.log('Manually close the file stream.'); writable.end(); }, 1000);
@param destinationOptional specific stream to unpipe
- chunk: any,encoding?: BufferEncoding): void;
Passing
chunk
asnull
signals the end of the stream (EOF) and behaves the same asreadable.push(null)
, after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.The
readable.unshift()
method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.The
stream.unshift(chunk)
method cannot be called after the'end'
event has been emitted or a runtime error will be thrown.Developers using
stream.unshift()
often should consider switching to use of aTransform
stream instead. See theAPI for stream implementers
section for more information.// Pull off a header delimited by \n\n. // Use unshift() if we get too much. // Call the callback with (error, header, stream). import { StringDecoder } from 'node:string_decoder'; function parseHeader(stream, callback) { stream.on('error', callback); stream.on('readable', onReadable); const decoder = new StringDecoder('utf8'); let header = ''; function onReadable() { let chunk; while (null !== (chunk = stream.read())) { const str = decoder.write(chunk); if (str.includes('\n\n')) { // Found the header boundary. const split = str.split(/\n\n/); header += split.shift(); const remaining = split.join('\n\n'); const buf = Buffer.from(remaining, 'utf8'); stream.removeListener('error', callback); // Remove the 'readable' listener before unshifting. stream.removeListener('readable', onReadable); if (buf.length) stream.unshift(buf); // Now the body of the message can be read from the stream. callback(null, header, stream); return; } // Still reading the header. header += str; } } }
Unlike push,
stream.unshift(chunk)
will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results ifreadable.unshift()
is called during a read (i.e. from within a _read implementation on a custom stream). Following the call toreadable.unshift()
with an immediate push will reset the reading state appropriately, however it is best to simply avoid callingreadable.unshift()
while in the process of performing a read.@param chunkChunk of data to unshift onto the read queue. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray}, {DataView} ornull
. For object mode streams,chunk
may be any JavaScript value.@param encodingEncoding of string chunks. Must be a valid
Buffer
encoding, such as'utf8'
or'ascii'
. - wrap(stream: ReadableStream): this;
Prior to Node.js 0.10, streams did not implement the entire
node:stream
module API as it is currently defined. (SeeCompatibility
for more information.)When using an older Node.js library that emits
'data'
events and has a pause method that is advisory only, thereadable.wrap()
method can be used to create aReadable
stream that uses the old stream as its data source.It will rarely be necessary to use
readable.wrap()
but the method has been provided as a convenience for interacting with older Node.js applications and libraries.import { OldReader } from './old-api-module.js'; import { Readable } from 'node:stream'; const oreader = new OldReader(); const myReader = new Readable().wrap(oreader); myReader.on('readable', () => { myReader.read(); // etc. });
@param streamAn "old style" readable stream
- chunk: any,): boolean;
The
writable.write()
method writes some data to the stream, and calls the suppliedcallback
once the data has been fully handled. If an error occurs, thecallback
will be called with the error as its first argument. Thecallback
is called asynchronously and before'error'
is emitted.The return value is
true
if the internal buffer is less than thehighWaterMark
configured when the stream was created after admittingchunk
. Iffalse
is returned, further attempts to write data to the stream should stop until the'drain'
event is emitted.While a stream is not draining, calls to
write()
will bufferchunk
, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'
event will be emitted. Oncewrite()
returns false, do not write more chunks until the'drain'
event is emitted. While callingwrite()
on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform
, because theTransform
streams are paused by default until they are piped or a'data'
or'readable'
event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readable
and use pipe. However, if callingwrite()
is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'
event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });
A
Writable
stream in object mode will always ignore theencoding
argument.@param chunkOptional data to write. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunk
may be any JavaScript value other thannull
.@param callbackCallback for when this chunk of data is flushed.
@returnsfalse
if the stream wishes for the calling code to wait for the'drain'
event to be emitted before continuing to write additional data; otherwisetrue
.chunk: any,encoding: BufferEncoding,): boolean;The
writable.write()
method writes some data to the stream, and calls the suppliedcallback
once the data has been fully handled. If an error occurs, thecallback
will be called with the error as its first argument. Thecallback
is called asynchronously and before'error'
is emitted.The return value is
true
if the internal buffer is less than thehighWaterMark
configured when the stream was created after admittingchunk
. Iffalse
is returned, further attempts to write data to the stream should stop until the'drain'
event is emitted.While a stream is not draining, calls to
write()
will bufferchunk
, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'
event will be emitted. Oncewrite()
returns false, do not write more chunks until the'drain'
event is emitted. While callingwrite()
on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform
, because theTransform
streams are paused by default until they are piped or a'data'
or'readable'
event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readable
and use pipe. However, if callingwrite()
is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'
event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });
A
Writable
stream in object mode will always ignore theencoding
argument.@param chunkOptional data to write. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunk
may be any JavaScript value other thannull
.@param encodingThe encoding, if
chunk
is a string.@param callbackCallback for when this chunk of data is flushed.
@returnsfalse
if the stream wishes for the calling code to wait for the'drain'
event to be emitted before continuing to write additional data; otherwisetrue
.
interface IncomingHttpHeaders
interface IncomingHttpStatusHeader
interface SecureClientSessionOptions
- allowPartialTrustChain?: boolean
Treat intermediate (non-self-signed) certificates in the trust CA certificate list as trusted.
- ALPNCallback?: (arg: { protocols: string[]; servername: string }) => undefined | string
If set, this will be called when a client opens a connection using the ALPN extension. One argument will be passed to the callback: an object containing
servername
andprotocols
fields, respectively containing the server name from the SNI extension (if any) and an array of ALPN protocol name strings. The callback must return either one of the strings listed inprotocols
, which will be returned to the client as the selected ALPN protocol, orundefined
, to reject the connection with a fatal alert. If a string is returned that does not match one of the client's ALPN protocols, an error will be thrown. This option cannot be used with theALPNProtocols
option, and setting both options will throw an error. - ALPNProtocols?: Uint8Array<ArrayBufferLike> | string[] | Uint8Array<ArrayBufferLike>[]
An array of strings or a Buffer naming possible ALPN protocols. (Protocols should be ordered by their priority.)
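As a sketch of the ALPNCallback described above, an options fragment that picks a protocol per connection (the protocol names are illustrative):

const options = {
  // Pick the first offered protocol this server understands,
  // or return undefined to reject the handshake with a fatal alert.
  ALPNCallback({ servername, protocols }) {
    console.log('ALPN offer from', servername || '(no SNI)', protocols);
    if (protocols.includes('h2')) return 'h2';
    if (protocols.includes('http/1.1')) return 'http/1.1';
    return undefined;
  },
};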
- cert?: string | Buffer<ArrayBufferLike> | (string | Buffer<ArrayBufferLike>)[]
Cert chains in PEM format. One cert chain should be provided per private key. Each cert chain should consist of the PEM formatted certificate for a provided private key, followed by the PEM formatted intermediate certificates (if any), in order, and not including the root CA (the root CA must be pre-known to the peer, see ca). When providing multiple cert chains, they do not have to be in the same order as their private keys in key. If the intermediate certificates are not provided, the peer will not be able to validate the certificate, and the handshake will fail.
- ciphers?: string
Cipher suite specification, replacing the default. For more information, see modifying the default cipher suite. Permitted ciphers can be obtained via tls.getCiphers(). Cipher names must be uppercased in order for OpenSSL to accept them.
- ecdhCurve?: string
A string describing a named curve or a colon separated list of curve NIDs or names, for example P-521:P-384:P-256, to use for ECDH key agreement. Set to auto to select the curve automatically. Use crypto.getCurves() to obtain a list of available curve names. On recent releases, openssl ecparam -list_curves will also display the name and description of each available elliptic curve. Default: tls.DEFAULT_ECDH_CURVE.
- enableTrace?: boolean
When enabled, TLS packet trace information is written to
stderr
. This can be used to debug TLS connection problems. - honorCipherOrder?: boolean
Attempt to use the server's cipher suite preferences instead of the client's. When true, causes SSL_OP_CIPHER_SERVER_PREFERENCE to be set in secureOptions
- key?: string | Buffer<ArrayBufferLike> | (string | Buffer<ArrayBufferLike> | KeyObject)[]
Private keys in PEM format. PEM allows the option of private keys being encrypted. Encrypted keys will be decrypted with options.passphrase. Multiple keys using different algorithms can be provided either as an array of unencrypted key strings or buffers, or an array of objects in the form {pem: <string|buffer>[, passphrase: <string>]}. The object form can only occur in an array. object.passphrase is optional. Encrypted keys will be decrypted with object.passphrase if provided, or options.passphrase if it is not.
- maxVersion?: SecureVersion
Optionally set the maximum TLS version to allow. One of
'TLSv1.3'
,'TLSv1.2'
,'TLSv1.1'
, or'TLSv1'
. Cannot be specified along with thesecureProtocol
option, use one or the other. Default:'TLSv1.3'
, unless changed using CLI options. Using--tls-max-v1.2
sets the default to'TLSv1.2'
. Using--tls-max-v1.3
sets the default to'TLSv1.3'
. If multiple of the options are provided, the highest maximum is used. - minVersion?: SecureVersion
Optionally set the minimum TLS version to allow. One of
'TLSv1.3'
,'TLSv1.2'
,'TLSv1.1'
, or'TLSv1'
. Cannot be specified along with thesecureProtocol
option, use one or the other. It is not recommended to use less than TLSv1.2, but it may be required for interoperability. Default:'TLSv1.2'
, unless changed using CLI options. Using--tls-v1.0
sets the default to'TLSv1'
. Using--tls-v1.1
sets the default to'TLSv1.1'
. Using--tls-min-v1.3
sets the default to 'TLSv1.3'. If multiple of the options are provided, the lowest minimum is used. - pfx?: string | Buffer<ArrayBufferLike> | string | Buffer<ArrayBufferLike> | PxfObject[]
PFX or PKCS12 encoded private key and certificate chain. pfx is an alternative to providing key and cert individually. PFX is usually encrypted, if it is, passphrase will be used to decrypt it. Multiple PFX can be provided either as an array of unencrypted PFX buffers, or an array of objects in the form {buf: <string|buffer>[, passphrase: <string>]}. The object form can only occur in an array. object.passphrase is optional. Encrypted PFX will be decrypted with object.passphrase if provided, or options.passphrase if it is not.
- requestCert?: boolean
If true the server will request a certificate from clients that connect and attempt to verify that certificate. Defaults to false.
- secureOptions?: number
Optionally affect the OpenSSL protocol behavior, which is not usually necessary. This should be used carefully if at all! Value is a numeric bitmask of the SSL_OP_* options from OpenSSL Options
- secureProtocol?: string
Legacy mechanism to select the TLS protocol version to use, it does not support independent control of the minimum and maximum version, and does not support limiting the protocol to TLSv1.3. Use minVersion and maxVersion instead. The possible values are listed as SSL_METHODS, use the function names as strings. For example, use 'TLSv1_1_method' to force TLS version 1.1, or 'TLS_method' to allow any TLS protocol version up to TLSv1.3. It is not recommended to use TLS versions less than 1.2, but it may be required for interoperability. Default: none, see minVersion.
- sessionIdContext?: string
Opaque identifier used by servers to ensure session state is not shared between applications. Unused by clients.
- sessionTimeout?: number
The number of seconds after which a TLS session created by the server will no longer be resumable. See Session Resumption for more information. Default: 300.
- sigalgs?: string
Colon-separated list of supported signature algorithms. The list can contain digest algorithms (SHA256, MD5 etc.), public key algorithms (RSA-PSS, ECDSA etc.), a combination of both (e.g. 'RSA+SHA384'), or TLS v1.3 scheme names (e.g. rsa_pss_pss_sha512).
- SNICallback?: (servername: string, cb: (err: null | Error, ctx?: SecureContext) => void) => void
SNICallback(servername, cb) <Function> A function that will be called if the client supports SNI TLS extension. Two arguments will be passed when called: servername and cb. SNICallback should invoke cb(null, ctx), where ctx is a SecureContext instance. (tls.createSecureContext(...) can be used to get a proper SecureContext.) If SNICallback wasn't provided the default callback with high-level API will be used (see below).
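As a sketch, a per-hostname SNICallback built with tls.createSecureContext(); the certificate paths are placeholders:

import tls from 'node:tls';
import fs from 'node:fs';

const contexts = new Map([
  ['example.org', tls.createSecureContext({
    key: fs.readFileSync('example-org-key.pem'),   // placeholder path
    cert: fs.readFileSync('example-org-cert.pem'), // placeholder path
  })],
]);

const options = {
  SNICallback(servername, cb) {
    const ctx = contexts.get(servername);
    cb(ctx ? null : new Error(`unknown server name: ${servername}`), ctx);
  },
};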
- ticketKeys?: Buffer<ArrayBufferLike>
48-bytes of cryptographically strong pseudo-random data. See Session Resumption for more information.
- unknownProtocolTimeout?: number
Specifies a timeout in milliseconds that a server should wait when an [
'unknownProtocol'
][] is emitted. If the socket has not been destroyed by that time the server will destroy it. - hint: null | string
When negotiating TLS-PSK (pre-shared keys), this function is called with optional identity
hint
provided by the server ornull
in case of TLS 1.3 wherehint
was removed. It will be necessary to provide a customtls.checkServerIdentity()
for the connection as the default one will try to check hostname/IP of the server against the certificate but that's not applicable for PSK because there won't be a certificate present. More information can be found in the RFC 4279.@param hintmessage sent from the server to help client decide which identity to use during negotiation. Always
null
if TLS 1.3 is used.@returnsReturn
null
to stop the negotiation process.psk
must be compatible with the selected cipher's digest.identity
must use UTF-8 encoding.
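Pulling a few of these options together, a minimal hedged sketch of an HTTP/2 client connection; the certificate path and port are placeholders:

import http2 from 'node:http2';
import fs from 'node:fs';

const client = http2.connect('https://localhost:8443', {
  ca: fs.readFileSync('localhost-cert.pem'), // placeholder self-signed certificate
  minVersion: 'TLSv1.2',
});

const req = client.request({ ':path': '/' });
req.setEncoding('utf8');
let body = '';
req.on('data', (chunk) => { body += chunk; });
req.on('end', () => {
  console.log(body);
  client.close();
});
req.end();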
interface SecureServerOptions<Http1Request extends typeof IncomingMessage = typeof IncomingMessage, Http1Response extends typeof ServerResponse = typeof ServerResponse, Http2Request extends typeof Http2ServerRequest = typeof Http2ServerRequest, Http2Response extends typeof Http2ServerResponse = typeof Http2ServerResponse>
- allowPartialTrustChain?: boolean
Treat intermediate (non-self-signed) certificates in the trust CA certificate list as trusted.
- ALPNCallback?: (arg: { protocols: string[]; servername: string }) => undefined | string
If set, this will be called when a client opens a connection using the ALPN extension. One argument will be passed to the callback: an object containing
servername
andprotocols
fields, respectively containing the server name from the SNI extension (if any) and an array of ALPN protocol name strings. The callback must return either one of the strings listed inprotocols
, which will be returned to the client as the selected ALPN protocol, orundefined
, to reject the connection with a fatal alert. If a string is returned that does not match one of the client's ALPN protocols, an error will be thrown. This option cannot be used with theALPNProtocols
option, and setting both options will throw an error. - ALPNProtocols?: Uint8Array<ArrayBufferLike> | string[] | Uint8Array<ArrayBufferLike>[]
An array of strings or a Buffer naming possible ALPN protocols. (Protocols should be ordered by their priority.)
- blockList?: BlockList
blockList
can be used for disabling inbound access to specific IP addresses, IP ranges, or IP subnets. This does not work if the server is behind a reverse proxy, NAT, etc. because the address checked against the block list is the address of the proxy, or the one specified by the NAT. - cert?: string | Buffer<ArrayBufferLike> | (string | Buffer<ArrayBufferLike>)[]
Cert chains in PEM format. One cert chain should be provided per private key. Each cert chain should consist of the PEM formatted certificate for a provided private key, followed by the PEM formatted intermediate certificates (if any), in order, and not including the root CA (the root CA must be pre-known to the peer, see ca). When providing multiple cert chains, they do not have to be in the same order as their private keys in key. If the intermediate certificates are not provided, the peer will not be able to validate the certificate, and the handshake will fail.
- ciphers?: string
Cipher suite specification, replacing the default. For more information, see modifying the default cipher suite. Permitted ciphers can be obtained via tls.getCiphers(). Cipher names must be uppercased in order for OpenSSL to accept them.
- ecdhCurve?: string
A string describing a named curve or a colon separated list of curve NIDs or names, for example P-521:P-384:P-256, to use for ECDH key agreement. Set to auto to select the curve automatically. Use crypto.getCurves() to obtain a list of available curve names. On recent releases, openssl ecparam -list_curves will also display the name and description of each available elliptic curve. Default: tls.DEFAULT_ECDH_CURVE.
- enableTrace?: boolean
When enabled, TLS packet trace information is written to
stderr
. This can be used to debug TLS connection problems. - handshakeTimeout?: number
Abort the connection if the SSL/TLS handshake does not finish in the specified number of milliseconds. A 'tlsClientError' is emitted on the tls.Server object whenever a handshake times out. Default: 120000 (120 seconds).
- highWaterMark?: number
Optionally overrides all
net.Socket
s'readableHighWaterMark
andwritableHighWaterMark
. - honorCipherOrder?: boolean
Attempt to use the server's cipher suite preferences instead of the client's. When true, causes SSL_OP_CIPHER_SERVER_PREFERENCE to be set in secureOptions
- keepAlive?: boolean
If set to
true
, it enables keep-alive functionality on the socket immediately after a new incoming connection is received, similarly to what is done in
. - keepAliveInitialDelay?: number
If set to a positive number, it sets the initial delay before the first keepalive probe is sent on an idle socket.
- key?: string | Buffer<ArrayBufferLike> | (string | Buffer<ArrayBufferLike> | KeyObject)[]
Private keys in PEM format. PEM allows the option of private keys being encrypted. Encrypted keys will be decrypted with options.passphrase. Multiple keys using different algorithms can be provided either as an array of unencrypted key strings or buffers, or an array of objects in the form {pem: <string|buffer>[, passphrase: <string>]}. The object form can only occur in an array. object.passphrase is optional. Encrypted keys will be decrypted with object.passphrase if provided, or options.passphrase if it is not.
- maxVersion?: SecureVersion
Optionally set the maximum TLS version to allow. One of
'TLSv1.3'
,'TLSv1.2'
,'TLSv1.1'
, or'TLSv1'
. Cannot be specified along with thesecureProtocol
option, use one or the other. Default:'TLSv1.3'
, unless changed using CLI options. Using--tls-max-v1.2
sets the default to'TLSv1.2'
. Using--tls-max-v1.3
sets the default to'TLSv1.3'
. If multiple of the options are provided, the highest maximum is used. - minVersion?: SecureVersion
Optionally set the minimum TLS version to allow. One of
'TLSv1.3'
,'TLSv1.2'
,'TLSv1.1'
, or'TLSv1'
. Cannot be specified along with thesecureProtocol
option, use one or the other. It is not recommended to use less than TLSv1.2, but it may be required for interoperability. Default:'TLSv1.2'
, unless changed using CLI options. Using--tls-v1.0
sets the default to'TLSv1'
. Using--tls-v1.1
sets the default to'TLSv1.1'
. Using--tls-min-v1.3
sets the default to 'TLSv1.3'. If multiple of the options are provided, the lowest minimum is used. - noDelay?: boolean
If set to
true
, it disables the use of Nagle's algorithm immediately after a new incoming connection is received. - pfx?: string | Buffer<ArrayBufferLike> | string | Buffer<ArrayBufferLike> | PxfObject[]
PFX or PKCS12 encoded private key and certificate chain. pfx is an alternative to providing key and cert individually. PFX is usually encrypted, if it is, passphrase will be used to decrypt it. Multiple PFX can be provided either as an array of unencrypted PFX buffers, or an array of objects in the form {buf: <string|buffer>[, passphrase: <string>]}. The object form can only occur in an array. object.passphrase is optional. Encrypted PFX will be decrypted with object.passphrase if provided, or options.passphrase if it is not.
- pskIdentityHint?: string
hint to send to a client to help with selecting the identity during TLS-PSK negotiation. Will be ignored in TLS 1.3. Upon failing to set pskIdentityHint
tlsClientError
will be emitted withERR_TLS_PSK_SET_IDENTIY_HINT_FAILED
code. - requestCert?: boolean
If true the server will request a certificate from clients that connect and attempt to verify that certificate. Defaults to false.
- secureOptions?: number
Optionally affect the OpenSSL protocol behavior, which is not usually necessary. This should be used carefully if at all! Value is a numeric bitmask of the SSL_OP_* options from OpenSSL Options
- secureProtocol?: string
Legacy mechanism to select the TLS protocol version to use, it does not support independent control of the minimum and maximum version, and does not support limiting the protocol to TLSv1.3. Use minVersion and maxVersion instead. The possible values are listed as SSL_METHODS, use the function names as strings. For example, use 'TLSv1_1_method' to force TLS version 1.1, or 'TLS_method' to allow any TLS protocol version up to TLSv1.3. It is not recommended to use TLS versions less than 1.2, but it may be required for interoperability. Default: none, see minVersion.
- sessionIdContext?: string
Opaque identifier used by servers to ensure session state is not shared between applications. Unused by clients.
- sessionTimeout?: number
The number of seconds after which a TLS session created by the server will no longer be resumable. See Session Resumption for more information. Default: 300.
- sigalgs?: string
Colon-separated list of supported signature algorithms. The list can contain digest algorithms (SHA256, MD5 etc.), public key algorithms (RSA-PSS, ECDSA etc.), a combination of both (e.g. 'RSA+SHA384'), or TLS v1.3 scheme names (e.g. rsa_pss_pss_sha512).
- SNICallback?: (servername: string, cb: (err: null | Error, ctx?: SecureContext) => void) => void
SNICallback(servername, cb) <Function> A function that will be called if the client supports SNI TLS extension. Two arguments will be passed when called: servername and cb. SNICallback should invoke cb(null, ctx), where ctx is a SecureContext instance. (tls.createSecureContext(...) can be used to get a proper SecureContext.) If SNICallback wasn't provided the default callback with high-level API will be used (see below).
- unknownProtocolTimeout?: number
Specifies a timeout in milliseconds that a server should wait when an [
'unknownProtocol'
][] is emitted. If the socket has not been destroyed by that time the server will destroy it. - identity: string): null | TypedArray<ArrayBufferLike> | DataView<ArrayBufferLike>;@param identity
identity parameter sent from the client.
@returnspre-shared key that must either be a buffer or
null
to stop the negotiation process. Returned PSK must be compatible with the selected cipher's digest.When negotiating TLS-PSK (pre-shared keys), this function is called with the identity provided by the client. If the return value is
null
the negotiation process will stop and an "unknown_psk_identity" alert message will be sent to the other party. If the server wishes to hide the fact that the PSK identity was not known, the callback must provide some random data aspsk
to make the connection fail with "decrypt_error" before negotiation is finished. PSK ciphers are disabled by default, and using TLS-PSK thus requires explicitly specifying a cipher suite with theciphers
option. More information can be found in the RFC 4279.
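Putting these server options to use, a minimal hedged sketch of http2.createSecureServer(); the key and certificate paths are placeholders:

import http2 from 'node:http2';
import fs from 'node:fs';

const server = http2.createSecureServer({
  key: fs.readFileSync('localhost-privkey.pem'), // placeholder paths
  cert: fs.readFileSync('localhost-cert.pem'),
  minVersion: 'TLSv1.2',
});

server.on('stream', (stream) => {
  stream.respond({ ':status': 200, 'content-type': 'text/plain; charset=utf-8' });
  stream.end('hello over HTTP/2');
});

server.listen(8443);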
interface SecureServerSessionOptions<Http1Request extends typeof IncomingMessage = typeof IncomingMessage, Http1Response extends typeof ServerResponse = typeof ServerResponse, Http2Request extends typeof Http2ServerRequest = typeof Http2ServerRequest, Http2Response extends typeof Http2ServerResponse = typeof Http2ServerResponse>
- allowPartialTrustChain?: boolean
Treat intermediate (non-self-signed) certificates in the trust CA certificate list as trusted.
- ALPNCallback?: (arg: { protocols: string[]; servername: string }) => undefined | string
If set, this will be called when a client opens a connection using the ALPN extension. One argument will be passed to the callback: an object containing
servername
andprotocols
fields, respectively containing the server name from the SNI extension (if any) and an array of ALPN protocol name strings. The callback must return either one of the strings listed inprotocols
, which will be returned to the client as the selected ALPN protocol, orundefined
, to reject the connection with a fatal alert. If a string is returned that does not match one of the client's ALPN protocols, an error will be thrown. This option cannot be used with theALPNProtocols
option, and setting both options will throw an error. - ALPNProtocols?: Uint8Array<ArrayBufferLike> | string[] | Uint8Array<ArrayBufferLike>[]
An array of strings or a Buffer naming possible ALPN protocols. (Protocols should be ordered by their priority.)
- blockList?: BlockList
blockList
can be used for disabling inbound access to specific IP addresses, IP ranges, or IP subnets. This does not work if the server is behind a reverse proxy, NAT, etc. because the address checked against the block list is the address of the proxy, or the one specified by the NAT. - cert?: string | Buffer<ArrayBufferLike> | string | Buffer<ArrayBufferLike>[]
Cert chains in PEM format. One cert chain should be provided per private key. Each cert chain should consist of the PEM formatted certificate for a provided private key, followed by the PEM formatted intermediate certificates (if any), in order, and not including the root CA (the root CA must be pre-known to the peer, see ca). When providing multiple cert chains, they do not have to be in the same order as their private keys in key. If the intermediate certificates are not provided, the peer will not be able to validate the certificate, and the handshake will fail.
- ciphers?: string
Cipher suite specification, replacing the default. For more information, see modifying the default cipher suite. Permitted ciphers can be obtained via tls.getCiphers(). Cipher names must be uppercased in order for OpenSSL to accept them.
- ecdhCurve?: string
A string describing a named curve or a colon separated list of curve NIDs or names, for example P-521:P-384:P-256, to use for ECDH key agreement. Set to auto to select the curve automatically. Use crypto.getCurves() to obtain a list of available curve names. On recent releases, openssl ecparam -list_curves will also display the name and description of each available elliptic curve. Default: tls.DEFAULT_ECDH_CURVE.
- enableTrace?: boolean
When enabled, TLS packet trace information is written to
stderr
. This can be used to debug TLS connection problems. - handshakeTimeout?: number
Abort the connection if the SSL/TLS handshake does not finish in the specified number of milliseconds. A 'tlsClientError' is emitted on the tls.Server object whenever a handshake times out. Default: 120000 (120 seconds).
- highWaterMark?: number
Optionally overrides all
net.Socket
s'readableHighWaterMark
andwritableHighWaterMark
. - honorCipherOrder?: boolean
Attempt to use the server's cipher suite preferences instead of the client's. When true, causes SSL_OP_CIPHER_SERVER_PREFERENCE to be set in secureOptions
- keepAlive?: boolean
If set to
true
, it enables keep-alive functionality on the socket immediately after a new incoming connection is received, similarly to what is done insocket.setKeepAlive([enable][, initialDelay])
. - keepAliveInitialDelay?: number
If set to a positive number, it sets the initial delay before the first keepalive probe is sent on an idle socket.
- key?: string | Buffer<ArrayBufferLike> | string | Buffer<ArrayBufferLike> | KeyObject[]
Private keys in PEM format. PEM allows the option of private keys being encrypted. Encrypted keys will be decrypted with options.passphrase. Multiple keys using different algorithms can be provided either as an array of unencrypted key strings or buffers, or an array of objects in the form {pem: <string|buffer>[, passphrase: <string>]}. The object form can only occur in an array. object.passphrase is optional. Encrypted keys will be decrypted with object.passphrase if provided, or options.passphrase if it is not.
- maxVersion?: SecureVersion
Optionally set the maximum TLS version to allow. One of
'TLSv1.3'
,'TLSv1.2'
,'TLSv1.1'
, or'TLSv1'
. Cannot be specified along with thesecureProtocol
option, use one or the other. Default:'TLSv1.3'
, unless changed using CLI options. Using--tls-max-v1.2
sets the default to'TLSv1.2'
. Using--tls-max-v1.3
sets the default to'TLSv1.3'
. If multiple of the options are provided, the highest maximum is used. - minVersion?: SecureVersion
Optionally set the minimum TLS version to allow. One of
'TLSv1.3'
,'TLSv1.2'
,'TLSv1.1'
, or'TLSv1'
. Cannot be specified along with thesecureProtocol
option, use one or the other. It is not recommended to use less than TLSv1.2, but it may be required for interoperability. Default:'TLSv1.2'
, unless changed using CLI options. Using--tls-v1.0
sets the default to'TLSv1'
. Using--tls-v1.1
sets the default to'TLSv1.1'
. Using--tls-min-v1.3
sets the default to 'TLSv1.3'. If multiple of the options are provided, the lowest minimum is used. - noDelay?: boolean
If set to
true
, it disables the use of Nagle's algorithm immediately after a new incoming connection is received. - pfx?: string | Buffer<ArrayBufferLike> | string | Buffer<ArrayBufferLike> | PxfObject[]
PFX or PKCS12 encoded private key and certificate chain. pfx is an alternative to providing key and cert individually. PFX is usually encrypted, if it is, passphrase will be used to decrypt it. Multiple PFX can be provided either as an array of unencrypted PFX buffers, or an array of objects in the form {buf: <string|buffer>[, passphrase: <string>]}. The object form can only occur in an array. object.passphrase is optional. Encrypted PFX will be decrypted with object.passphrase if provided, or options.passphrase if it is not.
- pskIdentityHint?: string
Hint to send to a client to help with selecting the identity during TLS-PSK negotiation. Will be ignored in TLS 1.3. Upon failing to set pskIdentityHint
tlsClientError
will be emitted withERR_TLS_PSK_SET_IDENTIY_HINT_FAILED
code. - requestCert?: boolean
If true the server will request a certificate from clients that connect and attempt to verify that certificate. Defaults to false.
- secureOptions?: number
Optionally affect the OpenSSL protocol behavior, which is not usually necessary. This should be used carefully if at all! Value is a numeric bitmask of the SSL_OP_* options from OpenSSL Options
- secureProtocol?: string
Legacy mechanism to select the TLS protocol version to use, it does not support independent control of the minimum and maximum version, and does not support limiting the protocol to TLSv1.3. Use minVersion and maxVersion instead. The possible values are listed as SSL_METHODS, use the function names as strings. For example, use 'TLSv1_1_method' to force TLS version 1.1, or 'TLS_method' to allow any TLS protocol version up to TLSv1.3. It is not recommended to use TLS versions less than 1.2, but it may be required for interoperability. Default: none, see minVersion.
- sessionIdContext?: string
Opaque identifier used by servers to ensure session state is not shared between applications. Unused by clients.
- sessionTimeout?: number
The number of seconds after which a TLS session created by the server will no longer be resumable. See Session Resumption for more information. Default: 300.
- sigalgs?: string
Colon-separated list of supported signature algorithms. The list can contain digest algorithms (SHA256, MD5, etc.), public key algorithms (RSA-PSS, ECDSA, etc.), combinations of both (e.g. 'RSA+SHA384'), or TLS v1.3 scheme names (e.g. rsa_pss_pss_sha512).
- SNICallback?: (servername: string, cb: (err: null | Error, ctx?: SecureContext) => void) => void
SNICallback(servername, cb) <Function> A function that will be called if the client supports SNI TLS extension. Two arguments will be passed when called: servername and cb. SNICallback should invoke cb(null, ctx), where ctx is a SecureContext instance. (tls.createSecureContext(...) can be used to get a proper SecureContext.) If SNICallback wasn't provided the default callback with high-level API will be used (see below).
- unknownProtocolTimeout?: number
Specifies a timeout in milliseconds that a server should wait when an 'unknownProtocol' event is emitted. If the socket has not been destroyed by that time the server will destroy it. - identity: string): null | TypedArray<ArrayBufferLike> | DataView<ArrayBufferLike>;@param identity
identity parameter sent from the client.
@returnspre-shared key that must either be a buffer or
null
to stop the negotiation process. Returned PSK must be compatible with the selected cipher's digest.When negotiating TLS-PSK (pre-shared keys), this function is called with the identity provided by the client. If the return value is
null
the negotiation process will stop and an "unknown_psk_identity" alert message will be sent to the other party. If the server wishes to hide the fact that the PSK identity was not known, the callback must provide some random data aspsk
to make the connection fail with "decrypt_error" before negotiation is finished. PSK ciphers are disabled by default, and using TLS-PSK thus requires explicitly specifying a cipher suite with theciphers
option. More information can be found in the RFC 4279.
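A minimal sketch of passing a few of the options above to http2.createSecureServer(); the file names, timeout value, and per-hostname context are placeholders, not prescribed values:

```js
import fs from 'node:fs';
import tls from 'node:tls';
import http2 from 'node:http2';

const server = http2.createSecureServer({
  key: fs.readFileSync('server-key.pem'),
  cert: fs.readFileSync('server-cert.pem'),
  minVersion: 'TLSv1.2',
  handshakeTimeout: 30_000,
  SNICallback(servername, cb) {
    // Hand back a SecureContext appropriate for the requested server name.
    cb(null, tls.createSecureContext({
      key: fs.readFileSync(`${servername}-key.pem`),
      cert: fs.readFileSync(`${servername}-cert.pem`),
    }));
  },
});

server.on('stream', (stream) => {
  stream.respond({ ':status': 200 });
  stream.end('ok');
});
server.listen(8443);
```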
interface ServerHttp2Session<Http1Request extends typeof IncomingMessage = typeof IncomingMessage, Http1Response extends typeof ServerResponse = typeof ServerResponse, Http2Request extends typeof Http2ServerRequest = typeof Http2ServerRequest, Http2Response extends typeof Http2ServerResponse = typeof Http2ServerResponse>
The
EventEmitter
class is defined and exposed by thenode:events
module:import { EventEmitter } from 'node:events';
All
EventEmitter
s emit the event'newListener'
when new listeners are added and'removeListener'
when existing listeners are removed.
- readonly alpnProtocol?: string
Value will be
undefined
if theHttp2Session
is not yet connected to a socket,h2c
if theHttp2Session
is not connected to aTLSSocket
, or will return the value of the connectedTLSSocket
's ownalpnProtocol
property. - readonly closed: boolean
Will be
true
if thisHttp2Session
instance has been closed, otherwisefalse
. - readonly connecting: boolean
Will be
true
if thisHttp2Session
instance is still connecting, will be set tofalse
before emittingconnect
event and/or calling thehttp2.connect
callback. - readonly destroyed: boolean
Will be
true
if thisHttp2Session
instance has been destroyed and must no longer be used, otherwisefalse
. - readonly encrypted?: boolean
Value is
undefined
if theHttp2Session
session socket has not yet been connected,true
if theHttp2Session
is connected with aTLSSocket
, andfalse
if theHttp2Session
is connected to any other kind of socket or stream. - readonly localSettings: Settings
A prototype-less object describing the current local settings of this
Http2Session
. The local settings are local to thisHttp2Session
instance. - readonly originSet?: string[]
If the
Http2Session
is connected to aTLSSocket
, theoriginSet
property will return anArray
of origins for which theHttp2Session
may be considered authoritative.The
originSet
property is only available when using a secure TLS connection. - readonly pendingSettingsAck: boolean
Indicates whether the
Http2Session
is currently waiting for acknowledgment of a sentSETTINGS
frame. Will betrue
after calling thehttp2session.settings()
method. Will befalse
once all sentSETTINGS
frames have been acknowledged. - readonly remoteSettings: Settings
A prototype-less object describing the current remote settings of this
Http2Session
. The remote settings are set by the connected HTTP/2 peer. - readonly server: Http2Server<Http1Request, Http1Response, Http2Request, Http2Response> | Http2SecureServer<Http1Request, Http1Response, Http2Request, Http2Response>
- readonly socket: Socket | TLSSocket
Returns a
Proxy
object that acts as anet.Socket
(ortls.TLSSocket
) but limits available methods to ones safe to use with HTTP/2.destroy
,emit
,end
,pause
,read
,resume
, andwrite
will throw an error with codeERR_HTTP2_NO_SOCKET_MANIPULATION
. SeeHttp2Session and Sockets
for more information.setTimeout
method will be called on thisHttp2Session
.All other interactions will be routed directly to the socket.
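A short sketch of the proxying behavior just described, assuming a plain (non-TLS) server for brevity; the timeout value is illustrative:

```js
import http2 from 'node:http2';

const server = http2.createServer();
server.on('session', (session) => {
  // Reads such as remoteAddress are forwarded to the underlying socket.
  console.log('peer:', session.socket.remoteAddress, session.socket.remotePort);
  // setTimeout is redirected to the Http2Session itself.
  session.socket.setTimeout(60_000);
  try {
    // Direct manipulation of the socket is blocked.
    session.socket.write('raw bytes');
  } catch (err) {
    console.error(err.code); // 'ERR_HTTP2_NO_SOCKET_MANIPULATION'
  }
});
server.listen(8000);
```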
- readonly state: SessionState
Provides miscellaneous information about the current state of the
Http2Session
.An object describing the current status of this
Http2Session
. - readonly type: number
The
http2session.type
will be equal tohttp2.constants.NGHTTP2_SESSION_SERVER
if thisHttp2Session
instance is a server, andhttp2.constants.NGHTTP2_SESSION_CLIENT
if the instance is a client. - event: 'connect',listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>, socket: Socket | TLSSocket) => void): this;
Alias for
emitter.on(eventName, listener)
.event: 'stream',): this;Alias for
emitter.on(eventName, listener)
.event: string | symbol,listener: (...args: any[]) => void): this;Alias for
emitter.on(eventName, listener)
. - alt: string,): void;
Submits an
ALTSVC
frame (as defined by RFC 7838) to the connected client.import http2 from 'node:http2'; const server = http2.createServer(); server.on('session', (session) => { // Set altsvc for origin https://example.org:80 session.altsvc('h2=":8000"', 'https://example.org:80'); }); server.on('stream', (stream) => { // Set altsvc for a specific stream stream.session.altsvc('h2=":8000"', stream.id); });
Sending an
ALTSVC
frame with a specific stream ID indicates that the alternate service is associated with the origin of the givenHttp2Stream
.The
alt
and origin strings must contain only ASCII bytes and are strictly interpreted as a sequence of ASCII bytes. The special value'clear'
may be passed to clear any previously set alternative service for a given domain.When a string is passed for the
originOrStream
argument, it will be parsed as a URL and the origin will be derived. For instance, the origin for the HTTP URL'https://example.org/foo/bar'
is the ASCII string'https://example.org'
. An error will be thrown if either the given string cannot be parsed as a URL or if a valid origin cannot be derived.A
URL
object, or any object with anorigin
property, may be passed asoriginOrStream
, in which case the value of theorigin
property will be used. The value of theorigin
property must be a properly serialized ASCII origin.@param altA description of the alternative service configuration as defined by
RFC 7838
.@param originOrStreamEither a URL string specifying the origin (or an
Object
with anorigin
property) or the numeric identifier of an activeHttp2Stream
as given by thehttp2stream.id
property. - callback?: () => void): void;
Gracefully closes the
Http2Session
, allowing any existing streams to complete on their own and preventing newHttp2Stream
instances from being created. Once closed,http2session.destroy()
might be called if there are no openHttp2Stream
instances.If specified, the
callback
function is registered as a handler for the'close'
event. - code?: number): void;
Immediately terminates the
Http2Session
and the associatednet.Socket
ortls.TLSSocket
.Once destroyed, the
Http2Session
will emit the'close'
event. Iferror
is not undefined, an'error'
event will be emitted immediately before the'close'
event.If there are any remaining open
Http2Streams
associated with theHttp2Session
, those will also be destroyed.@param errorAn
Error
object if theHttp2Session
is being destroyed due to an error.@param codeThe HTTP/2 error code to send in the final
GOAWAY
frame. If unspecified, anderror
is not undefined, the default isINTERNAL_ERROR
, otherwise defaults toNO_ERROR
. - emit(event: 'connect',): boolean;
Synchronously calls each of the listeners registered for the event named
eventName
, in the order they were registered, passing the supplied arguments to each.Returns
true
if the event had listeners,false
otherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener
emit(event: 'stream',flags: number): boolean;Synchronously calls each of the listeners registered for the event named
eventName
, in the order they were registered, passing the supplied arguments to each.Returns
true
if the event had listeners,false
otherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener
emit(event: string | symbol,...args: any[]): boolean;Synchronously calls each of the listeners registered for the event named
eventName
, in the order they were registered, passing the supplied arguments to each.Returns
true
if the event had listeners,false
otherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener
Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbol
s.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]
Returns the current max listener value for the
EventEmitter
which is either set byemitter.setMaxListeners(n)
or defaults to EventEmitter.defaultMaxListeners.- code?: number,lastStreamID?: number,opaqueData?: ArrayBufferView<ArrayBufferLike>): void;
Transmits a
GOAWAY
frame to the connected peer without shutting down theHttp2Session
.@param codeAn HTTP/2 error code
@param lastStreamIDThe numeric ID of the last processed
Http2Stream
@param opaqueDataA
TypedArray
orDataView
instance containing additional data to be carried within theGOAWAY
frame. - eventName: string | symbol,listener?: Function): number;
Returns the number of listeners listening for the event named
eventName
. Iflistener
is provided, it will return how many times the listener is found in the list of the listeners of the event.@param eventNameThe name of the event being listened for
@param listenerThe event handler function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName
.server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ]
- eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for
emitter.removeListener()
. - on(event: 'connect',listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>, socket: Socket | TLSSocket) => void): this;
Adds the
listener
function to the end of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
on(event: 'stream',): this;Adds the
listener
function to the end of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
on(event: string | symbol,listener: (...args: any[]) => void): this;Adds the
listener
function to the end of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
- once(event: 'connect',listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>, socket: Socket | TLSSocket) => void): this;
Adds a one-time
listener
function for the event namedeventName
. The next timeeventName
is triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
once(event: 'stream',): this;Adds a one-time
listener
function for the event namedeventName
. The next timeeventName
is triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
once(event: string | symbol,listener: (...args: any[]) => void): this;Adds a one-time
listener
function for the event namedeventName
. The next timeeventName
is triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
- ): void;
Submits an
ORIGIN
frame (as defined by RFC 8336) to the connected client to advertise the set of origins for which the server is capable of providing authoritative responses.import http2 from 'node:http2'; const options = getSecureOptionsSomehow(); const server = http2.createSecureServer(options); server.on('stream', (stream) => { stream.respond(); stream.end('ok'); }); server.on('session', (session) => { session.origin('https://example.com', 'https://example.org'); });
When a string is passed as an
origin
, it will be parsed as a URL and the origin will be derived. For instance, the origin for the HTTP URL'https://example.org/foo/bar'
is the ASCII string'https://example.org'
. An error will be thrown if either the given string cannot be parsed as a URL or if a valid origin cannot be derived.A
URL
object, or any object with anorigin
property, may be passed as anorigin
, in which case the value of theorigin
property will be used. The value of theorigin
property must be a properly serialized ASCII origin.Alternatively, the
origins
option may be used when creating a new HTTP/2 server using thehttp2.createSecureServer()
method:import http2 from 'node:http2'; const options = getSecureOptionsSomehow(); options.origins = ['https://example.com', 'https://example.org']; const server = http2.createSecureServer(options); server.on('stream', (stream) => { stream.respond(); stream.end('ok'); });
@param originsOne or more URL Strings passed as separate arguments.
- ping(): boolean;
Sends a
PING
frame to the connected HTTP/2 peer. Acallback
function must be provided. The method will returntrue
if thePING
was sent,false
otherwise.The maximum number of outstanding (unacknowledged) pings is determined by the
maxOutstandingPings
configuration option. The default maximum is 10.If provided, the
payload
must be aBuffer
,TypedArray
, orDataView
containing 8 bytes of data that will be transmitted with thePING
and returned with the ping acknowledgment.The callback will be invoked with three arguments: an error argument that will be
null
if thePING
was successfully acknowledged, aduration
argument that reports the number of milliseconds elapsed since the ping was sent and the acknowledgment was received, and aBuffer
containing the 8-bytePING
payload.session.ping(Buffer.from('abcdefgh'), (err, duration, payload) => { if (!err) { console.log(`Ping acknowledged in ${duration} milliseconds`); console.log(`With payload '${payload.toString()}'`); } });
If the
payload
argument is not specified, the default payload will be the 64-bit timestamp (little endian) marking the start of thePING
duration. - event: 'connect',listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>, socket: Socket | TLSSocket) => void): this;
Adds the
listener
function to the beginning of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
event: 'stream',): this;Adds the
listener
function to the beginning of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
event: string | symbol,listener: (...args: any[]) => void): this;Adds the
listener
function to the beginning of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
- event: 'connect',listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>, socket: Socket | TLSSocket) => void): this;
Adds a one-time
listener
function for the event namedeventName
to the beginning of the listeners array. The next timeeventName
is triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
event: 'stream',): this;Adds a one-time
listener
function for the event namedeventName
to the beginning of the listeners array. The next timeeventName
is triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
event: string | symbol,listener: (...args: any[]) => void): this;Adds a one-time
listener
function for the event namedeventName
to the beginning of the listeners array. The next timeeventName
is triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName
, including any wrappers (such as those created by.once()
).import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log');
Calls
ref()
on thisHttp2Session
instance's underlyingnet.Socket
.- eventName?: string | symbol): this;
Removes all listeners, or those of the specified
eventName
.It is bad practice to remove listeners added elsewhere in the code, particularly when the
EventEmitter
instance was created by some other component or module (e.g. sockets or file streams).Returns a reference to the
EventEmitter
, so that calls can be chained. - eventName: string | symbol,listener: (...args: any[]) => void): this;
Removes the specified
listener
from the listener array for the event namedeventName
.const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback);
removeListener()
will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specifiedeventName
, thenremoveListener()
must be called multiple times to remove each instance.Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any
removeListener()
orremoveAllListeners()
calls after emitting and before the last listener finishes execution will not remove them fromemit()
in progress. Subsequent events behave as expected.import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // A
Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the
emitter.listeners()
method will need to be recreated.When a single function has been added as a handler multiple times for a single event (as in the example below),
removeListener()
will remove the most recently added instance. In the example theonce('ping')
listener is removed:import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping');
Returns a reference to the
EventEmitter
, so that calls can be chained. - windowSize: number): void;
Sets the local endpoint's window size. The
windowSize
is the total window size to set, not the delta.import http2 from 'node:http2'; const server = http2.createServer(); const expectedWindowSize = 2 ** 20; server.on('connect', (session) => { // Set local window size to be 2 ** 20 session.setLocalWindowSize(expectedWindowSize); });
- n: number): this;
By default
EventEmitter
s will print a warning if more than10
listeners are added for a particular event. This is a useful default that helps finding memory leaks. Theemitter.setMaxListeners()
method allows the limit to be modified for this specificEventEmitter
instance. The value can be set toInfinity
(or0
) to indicate an unlimited number of listeners.Returns a reference to the
EventEmitter
, so that calls can be chained. - msecs: number,callback?: () => void): void;
Used to set a callback function that is called when there is no activity on the
Http2Session
aftermsecs
milliseconds. The givencallback
is registered as a listener on the'timeout'
event. - ): void;
Updates the current local settings for this
Http2Session
and sends a newSETTINGS
frame to the connected HTTP/2 peer.Once called, the
http2session.pendingSettingsAck
property will betrue
while the session is waiting for the remote peer to acknowledge the new settings.The new settings will not become effective until the
SETTINGS
acknowledgment is received and the'localSettings'
event is emitted. It is possible to send multipleSETTINGS
frames while acknowledgment is still pending.@param callbackCallback that is called once the session is connected or right away if the session is already connected.
Calls
unref()
on thisHttp2Session
instance's underlyingnet.Socket
.
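A sketch tying together the session lifecycle methods described in this interface (setTimeout(), destroy(), close(), and goaway()); the timeout value and shutdown signal are illustrative:

```js
import http2 from 'node:http2';

const server = http2.createServer();
const sessions = new Set();

server.on('session', (session) => {
  sessions.add(session);
  session.on('close', () => sessions.delete(session));
  // Destroy sessions that stay idle for two minutes.
  session.setTimeout(120_000, () => session.destroy());
});

server.on('stream', (stream) => {
  stream.respond({ ':status': 200 });
  stream.end('ok');
});
server.listen(8000);

process.on('SIGTERM', () => {
  for (const session of sessions) {
    // Let open streams finish; goaway() could be sent first to notify the
    // peer without immediately closing the session.
    session.close();
  }
  server.close();
});
```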
interface ServerHttp2Stream
Duplex streams are streams that implement both the
Readable
andWritable
interfaces.Examples of
Duplex
streams include:TCP sockets
zlib streams
crypto streams
- readonly aborted: boolean
Set to
true
if theHttp2Stream
instance was aborted abnormally. When set, the'aborted'
event will have been emitted. - allowHalfOpen: boolean
If
false
then the stream will automatically end the writable side when the readable side ends. Set initially by theallowHalfOpen
constructor option, which defaults totrue
.This can be changed manually to change the half-open behavior of an existing
Duplex
stream instance, but must be changed before the'end'
event is emitted. - readonly bufferSize: number
This property shows the number of characters currently buffered to be written. See
net.Socket.bufferSize
for details. - readonly destroyed: boolean
Set to
true
if theHttp2Stream
instance has been destroyed and is no longer usable. - readonly endAfterHeaders: boolean
Set to
true
if theEND_STREAM
flag was set in the request or response HEADERS frame received, indicating that no additional data should be received and the readable side of theHttp2Stream
will be closed. - readonly id?: number
The numeric stream identifier of this
Http2Stream
instance. Set toundefined
if the stream identifier has not yet been assigned. - readonly pending: boolean
Set to
true
if theHttp2Stream
instance has not yet been assigned a numeric stream identifier. - readonly pushAllowed: boolean
Read-only property mapped to the
SETTINGS_ENABLE_PUSH
flag of the remote client's most recentSETTINGS
frame. Will betrue
if the remote peer accepts push streams,false
otherwise. Settings are the same for everyHttp2Stream
in the sameHttp2Session
. - readable: boolean
Is
true
if it is safe to call read, which means the stream has not been destroyed or emitted'error'
or'end'
. - readonly readableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'end'
. - readonly readableEncoding: null | BufferEncoding
Getter for the property
encoding
of a givenReadable
stream. Theencoding
property can be set using the setEncoding method. - readonly readableFlowing: null | boolean
This property reflects the current state of a
Readable
stream as described in the Three states section. - readonly readableHighWaterMark: number
Returns the value of
highWaterMark
passed when creating thisReadable
. - readonly readableLength: number
This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the
highWaterMark
. - readonly rstCode: number
Set to the
RST_STREAM
error code
reported when theHttp2Stream
is destroyed after either receiving anRST_STREAM
frame from the connected peer, callinghttp2stream.close()
, orhttp2stream.destroy()
. Will beundefined
if theHttp2Stream
has not been closed. - readonly sentHeaders: OutgoingHttpHeaders
An object containing the outbound headers sent for this
Http2Stream
. - readonly sentInfoHeaders?: OutgoingHttpHeaders[]
An array of objects containing the outbound informational (additional) headers sent for this
Http2Stream
. - readonly sentTrailers?: OutgoingHttpHeaders
An object containing the outbound trailers sent for this
Http2Stream
. - readonly session: undefined | Http2Session
A reference to the
Http2Session
instance that owns thisHttp2Stream
. The value will beundefined
after theHttp2Stream
instance is destroyed. - readonly state: StreamState
Provides miscellaneous information about the current state of the
Http2Stream
.A current state of this
Http2Stream
. - readonly writable: boolean
Is
true
if it is safe to callwritable.write()
, which means the stream has not been destroyed, errored, or ended. - readonly writableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'finish'
. - readonly writableCorked: number
Number of times
writable.uncork()
needs to be called in order to fully uncork the stream. - readonly writableEnded: boolean
Is
true
afterwritable.end()
has been called. This property does not indicate whether the data has been flushed, for this usewritable.writableFinished
instead. - readonly writableHighWaterMark: number
Return the value of
highWaterMark
passed when creating thisWritable
. - readonly writableLength: number
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the
highWaterMark
. - readonly writableNeedDrain: boolean
Is
true
if the stream's buffer has been full and stream will emit'drain'
. Calls
readable.destroy()
with anAbortError
and returns a promise that fulfills when the stream is finished.- ): void;
Sends an additional informational
HEADERS
frame to the connected HTTP/2 peer. - event: 'aborted',listener: () => void): this;
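For example, an informational response such as a 103 Early Hints could be sent before the final headers; a sketch (the link target is hypothetical):

```js
import http2 from 'node:http2';

const server = http2.createServer();
server.on('stream', (stream) => {
  // Informational (1xx) headers must be sent before the final response headers.
  stream.additionalHeaders({ ':status': 103, link: '</style.css>; rel=preload; as=style' });
  stream.respond({ ':status': 200, 'content-type': 'text/html' });
  stream.end('<link rel="stylesheet" href="/style.css"><p>ok</p>');
});
server.listen(8000);
```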
Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'close',listener: () => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'data',): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'drain',listener: () => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'end',listener: () => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'error',): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'finish',listener: () => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'frameError',listener: (frameType: number, errorCode: number) => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'pipe',): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'unpipe',): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'streamClosed',listener: (code: number) => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'timeout',listener: () => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'trailers',): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'wantTrailers',listener: () => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: string | symbol,listener: (...args: any[]) => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
This method returns a new stream with chunks of the underlying stream paired with a counter in the form
[index, chunk]
. The first index value is0
and it increases by 1 for each chunk produced.@returnsa stream of indexed pairs.
- code?: number,callback?: () => void): void;
Closes the
Http2Stream
instance by sending anRST_STREAM
frame to the connected HTTP/2 peer.@param codeUnsigned 32-bit integer identifying the error code.
@param callbackAn optional function registered to listen for the
'close'
event. - stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
The
writable.cork()
method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.The primary intent of
writable.cork()
is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination,writable.cork()
buffers all the chunks untilwritable.uncork()
is called, which will pass them all towritable._writev()
, if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use ofwritable.cork()
without implementingwritable._writev()
may have an adverse effect on throughput.See also:
writable.uncork()
,writable._writev()
.- ): this;
Destroy the stream. Optionally emit an
'error'
event, and emit a'close'
event (unlessemitClose
is set tofalse
). After this call, the readable stream will release any internal resources and subsequent calls topush()
will be ignored.Once
destroy()
has been called any further calls will be a no-op and no further errors except from_destroy()
may be emitted as'error'
.Implementors should not override this method, but instead implement
readable._destroy()
.@param errorError which will be passed as payload in
'error'
event - drop(limit: number,
This method returns a new stream with the first limit chunks dropped from the start.
@param limitthe number of chunks to drop from the readable.
@returnsa stream with limit chunks dropped from the start.
- emit(event: 'aborted'): boolean;
Synchronously calls each of the listeners registered for the event named
eventName
, in the order they were registered, passing the supplied arguments to each.Returns
true
if the event had listeners,false
otherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener
- end(cb?: () => void): this;
Calling the
writable.end()
method signals that no more data will be written to theWritable
. The optionalchunk
andencoding
arguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!
end(chunk: any,cb?: () => void): this;Calling the
writable.end()
method signals that no more data will be written to theWritable
. The optionalchunk
andencoding
arguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!
@param chunkOptional data to write. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunk
may be any JavaScript value other thannull
.end(chunk: any,encoding: BufferEncoding,cb?: () => void): this;Calling the
writable.end()
method signals that no more data will be written to theWritable
. The optionalchunk
andencoding
arguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!
@param chunkOptional data to write. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunk
may be any JavaScript value other thannull
.@param encodingThe encoding if
chunk
is a string Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbol
s.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]
- ): Promise<boolean>;
This method is similar to
Array.prototype.every
and calls fn on each chunk in the stream to check whether every awaited return value is truthy. Once an fn call's awaited return value is falsy, the stream is destroyed and the promise is fulfilled with false. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
true
if fn returned a truthy value for every one of the chunks. This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be
await
ed.@param fna function to filter chunks from the stream. Async or not.
@returnsa stream filtered with the predicate fn.
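A small sketch combining the filter() and every() helpers described above; these are the experimental Readable helpers, shown here on a standalone Readable for brevity (they apply equally to an Http2Stream's readable side):

```js
import { Readable } from 'node:stream';

// filter() keeps chunks whose predicate is truthy; every() resolves to a boolean.
const evens = Readable.from([1, 2, 3, 4]).filter((n) => n % 2 === 0);
console.log(await evens.every((n) => n % 2 === 0)); // true
```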
- ): Promise<undefined | T>;
This method is similar to
Array.prototype.find
and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled withundefined
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefined
if no element was found.find(): Promise<any>;This method is similar to
Array.prototype.find
and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled withundefined
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefined
if no element was found. This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.
It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.
@param fna function to map over every chunk in the stream. May be async. May be a stream or generator.
@returnsa stream flat-mapped with the function fn.
- ): Promise<void>;
This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be
await
ed.This method is different from
for await...of
loops in that it can optionally process chunks concurrently. In addition, aforEach
iteration can only be stopped by having passed asignal
option and aborting the related AbortController whilefor await...of
can be stopped withbreak
orreturn
. In either case the stream will be destroyed.This method is different from listening to the
'data'
event in that it uses thereadable
event in the underlying machinary and can limit the number of concurrent fn calls.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise for when the stream has finished.
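A minimal sketch of the forEach() helper described above, again on a standalone Readable for brevity:

```js
import { Readable } from 'node:stream';

// forEach() awaits fn for each chunk and resolves once the stream has finished.
await Readable.from(['a', 'b', 'c']).forEach((chunk) => console.log(chunk));
```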
Returns the current max listener value for the
EventEmitter
which is either set byemitter.setMaxListeners(n)
or defaults to EventEmitter.defaultMaxListeners.The
readable.isPaused()
method returns the current operating state of theReadable
. This is used primarily by the mechanism that underlies thereadable.pipe()
method. In most typical cases, there will be no reason to use this method directly.const readable = new stream.Readable(); readable.isPaused(); // === false readable.pause(); readable.isPaused(); // === true readable.resume(); readable.isPaused(); // === false
- options?: { destroyOnReturn: boolean }): AsyncIterator<any>;
The iterator created by this method gives users the option to cancel the destruction of the stream if the
for await...of
loop is exited byreturn
,break
, orthrow
, or if the iterator should destroy the stream if the stream emitted an error during iteration. - eventName: string | symbol,listener?: Function): number;
Returns the number of listeners listening for the event named
eventName
. Iflistener
is provided, it will return how many times the listener is found in the list of the listeners of the event.@param eventNameThe name of the event being listened for
@param listenerThe event handler function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName
.server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ]
- map(
This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be
await
ed before being passed to the result stream.@param fna function to map over every chunk in the stream. Async or not.
@returnsa stream mapped with the function fn.
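A short sketch of map(); an async fn is awaited before its result is pushed downstream, and the same helper is available on an Http2Stream's readable side:

```js
import { Readable } from 'node:stream';

const upper = Readable.from(['a', 'b', 'c']).map(async (chunk) => chunk.toUpperCase());
for await (const chunk of upper) console.log(chunk); // A B C
```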
- eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for
emitter.removeListener()
. - on(event: 'aborted',listener: () => void): this;
Adds the
listener
function to the end of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
- once(event: 'aborted',listener: () => void): this;
Adds a one-time
listener
function for the event namedeventName
. The next timeeventName
is triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
The
readable.pause()
method will cause a stream in flowing mode to stop emitting'data'
events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.const readable = getReadableStreamSomehow(); readable.on('data', (chunk) => { console.log(`Received ${chunk.length} bytes of data.`); readable.pause(); console.log('There will be no additional data for 1 second.'); setTimeout(() => { console.log('Now data will start flowing again.'); readable.resume(); }, 1000); });
The
readable.pause()
method has no effect if there is a'readable'
event listener.- event: 'aborted',listener: () => void): this;
Adds the
listener
function to the beginning of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
event: 'frameError',listener: (frameType: number, errorCode: number) => void): this;event: 'trailers',): this; - event: 'aborted',listener: () => void): this;
Adds a one-time
listener
function for the event namedeventName
to the beginning of the listeners array. The next timeeventName
is triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
event: 'data',): this;event: 'frameError',listener: (frameType: number, errorCode: number) => void): this;event: 'trailers',): this; - ): void;
Initiates a push stream. The callback is invoked with the new
Http2Stream
instance created for the push stream passed as the second argument, or anError
passed as the first argument.import http2 from 'node:http2'; const server = http2.createServer(); server.on('stream', (stream) => { stream.respond({ ':status': 200 }); stream.pushStream({ ':path': '/' }, (err, pushStream, headers) => { if (err) throw err; pushStream.respond({ ':status': 200 }); pushStream.end('some pushed data'); }); stream.end('some data'); });
Setting the weight of a push stream is not allowed in the
HEADERS
frame. Pass aweight
value tohttp2stream.priority
with thesilent
option set totrue
to enable server-side bandwidth balancing between concurrent streams.Calling
http2stream.pushStream()
from within a pushed stream is not permitted and will throw an error.@param callbackCallback that is called once the push stream has been initiated.
): void; - eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName
, including any wrappers (such as those created by.once()
).import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log');
- read(size?: number): any;
The
readable.read()
method reads data out of the internal buffer and returns it. If no data is available to be read,null
is returned. By default, the data is returned as aBuffer
object unless an encoding has been specified using thereadable.setEncoding()
method or the stream is operating in object mode.The optional
size
argument specifies a specific number of bytes to read. Ifsize
bytes are not available to be read,null
will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.If the
size
argument is not specified, all of the data contained in the internal buffer will be returned.The
size
argument must be less than or equal to 1 GiB.The
readable.read()
method should only be called onReadable
streams operating in paused mode. In flowing mode,readable.read()
is called automatically until the internal buffer is fully drained.const readable = getReadableStreamSomehow(); // 'readable' may be triggered multiple times as data is buffered in readable.on('readable', () => { let chunk; console.log('Stream is readable (new data received in buffer)'); // Use a loop to make sure we read all currently available data while (null !== (chunk = readable.read())) { console.log(`Read ${chunk.length} bytes of data...`); } }); // 'end' will be triggered once when there is no more data available readable.on('end', () => { console.log('Reached end of stream.'); });
Each call to
readable.read()
returns a chunk of data, ornull
. The chunks are not concatenated. Awhile
loop is necessary to consume all data currently in the buffer. When reading a large file, .read()
may returnnull
, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new'readable'
event will be emitted when there is more data in the buffer. Finally the'end'
event will be emitted when there is no more data to come.Therefore to read a file's whole contents from a
readable
, it is necessary to collect chunks across multiple'readable'
events:const chunks = []; readable.on('readable', () => { let chunk; while (null !== (chunk = readable.read())) { chunks.push(chunk); } }); readable.on('end', () => { const content = chunks.join(''); });
A
Readable
stream in object mode will always return a single item from a call toreadable.read(size)
, regardless of the value of thesize
argument.If the
readable.read()
method returns a chunk of data, a'data'
event will also be emitted.Calling read after the
'end'
event has been emitted will returnnull
. No runtime error will be raised.@param sizeOptional argument to specify how much data to read.
- initial?: undefined,): Promise<T>;
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeError
with theERR_INVALID_ARGS
code property.The reducer function iterates the stream element-by-element which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the
readable.map
method.@param fna reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
initial: T,): Promise<T>;This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeError
with theERR_INVALID_ARGS
code property.The reducer function iterates the stream element-by-element which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the
readable.map
method.@param fna reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
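A minimal sketch (assuming the experimental reduce helper), summing chunks with an explicit initial value so that an empty stream does not reject:

import { Readable } from 'node:stream';

const total = await Readable.from([1, 2, 3, 4])
  .reduce((sum, n) => sum + n, 0); // 0 is the initial accumulator

console.log(total); // 10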
- eventName?: string | symbol): this;
Removes all listeners, or those of the specified
eventName
.It is bad practice to remove listeners added elsewhere in the code, particularly when the
EventEmitter
instance was created by some other component or module (e.g. sockets or file streams).Returns a reference to the
EventEmitter
, so that calls can be chained. - event: 'close',listener: () => void): this;
Removes the specified
listener
from the listener array for the event namedeventName
.const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback);
removeListener()
will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specifiedeventName
, thenremoveListener()
must be called multiple times to remove each instance.Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any
removeListener()
orremoveAllListeners()
calls after emitting and before the last listener finishes execution will not remove them fromemit()
in progress. Subsequent events behave as expected.import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // A
Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the
emitter.listeners()
method will need to be recreated.When a single function has been added as a handler multiple times for a single event (as in the example below),
removeListener()
will remove the most recently added instance. In the example theonce('ping')
listener is removed:import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping');
Returns a reference to the
EventEmitter
, so that calls can be chained. - ): void;
import http2 from 'node:http2'; const server = http2.createServer(); server.on('stream', (stream) => { stream.respond({ ':status': 200 }); stream.end('some data'); });
Initiates a response. When the
options.waitForTrailers
option is set, the'wantTrailers'
event will be emitted immediately after queuing the last chunk of payload data to be sent. Thehttp2stream.sendTrailers()
method can then be used to send trailing header fields to the peer.When
options.waitForTrailers
is set, theHttp2Stream
will not automatically close when the finalDATA
frame is transmitted. User code must call eitherhttp2stream.sendTrailers()
orhttp2stream.close()
to close theHttp2Stream
.import http2 from 'node:http2'; const server = http2.createServer(); server.on('stream', (stream) => { stream.respond({ ':status': 200 }, { waitForTrailers: true }); stream.on('wantTrailers', () => { stream.sendTrailers({ ABC: 'some value to send' }); }); stream.end('some data'); });
- ): void;
Initiates a response whose data is read from the given file descriptor. No validation is performed on the given file descriptor. If an error occurs while attempting to read data using the file descriptor, the
Http2Stream
will be closed using anRST_STREAM
frame using the standardINTERNAL_ERROR
code.When used, the
Http2Stream
object'sDuplex
interface will be closed automatically.import http2 from 'node:http2'; import fs from 'node:fs'; const server = http2.createServer(); server.on('stream', (stream) => { const fd = fs.openSync('/some/file', 'r'); const stat = fs.fstatSync(fd); const headers = { 'content-length': stat.size, 'last-modified': stat.mtime.toUTCString(), 'content-type': 'text/plain; charset=utf-8', }; stream.respondWithFD(fd, headers); stream.on('close', () => fs.closeSync(fd)); });
The optional
options.statCheck
function may be specified to give user code an opportunity to set additional content headers based on thefs.Stat
details of the given fd. If thestatCheck
function is provided, thehttp2stream.respondWithFD()
method will perform anfs.fstat()
call to collect details on the provided file descriptor.The
offset
andlength
options may be used to limit the response to a specific range subset. This can be used, for instance, to support HTTP Range requests.The file descriptor or
FileHandle
is not closed when the stream is closed, so it will need to be closed manually once it is no longer needed. Using the same file descriptor concurrently for multiple streams is not supported and may result in data loss. Re-using a file descriptor after a stream has finished is supported.When the
options.waitForTrailers
option is set, the'wantTrailers'
event will be emitted immediately after queuing the last chunk of payload data to be sent. Thehttp2stream.sendTrailers()
method can then be used to send trailing header fields to the peer.When
options.waitForTrailers
is set, theHttp2Stream
will not automatically close when the finalDATA
frame is transmitted. User code must call eitherhttp2stream.sendTrailers()
orhttp2stream.close()
to close theHttp2Stream
.import http2 from 'node:http2'; import fs from 'node:fs'; const server = http2.createServer(); server.on('stream', (stream) => { const fd = fs.openSync('/some/file', 'r'); const stat = fs.fstatSync(fd); const headers = { 'content-length': stat.size, 'last-modified': stat.mtime.toUTCString(), 'content-type': 'text/plain; charset=utf-8', }; stream.respondWithFD(fd, headers, { waitForTrailers: true }); stream.on('wantTrailers', () => { stream.sendTrailers({ ABC: 'some value to send' }); }); stream.on('close', () => fs.closeSync(fd)); });
@param fdA readable file descriptor.
- path: string,): void;
Sends a regular file as the response. The
path
must specify a regular file or an'error'
event will be emitted on theHttp2Stream
object.When used, the
Http2Stream
object'sDuplex
interface will be closed automatically.The optional
options.statCheck
function may be specified to give user code an opportunity to set additional content headers based on thefs.Stat
details of the given file:If an error occurs while attempting to read the file data, the
Http2Stream
will be closed using anRST_STREAM
frame using the standardINTERNAL_ERROR
code. If theonError
callback is defined, then it will be called. Otherwise, the stream will be destroyed.Example using a file path:
import http2 from 'node:http2'; const server = http2.createServer(); server.on('stream', (stream) => { function statCheck(stat, headers) { headers['last-modified'] = stat.mtime.toUTCString(); } function onError(err) { // stream.respond() can throw if the stream has been destroyed by // the other side. try { if (err.code === 'ENOENT') { stream.respond({ ':status': 404 }); } else { stream.respond({ ':status': 500 }); } } catch (err) { // Perform actual error handling. console.error(err); } stream.end(); } stream.respondWithFile('/some/file', { 'content-type': 'text/plain; charset=utf-8' }, { statCheck, onError }); });
The
options.statCheck
function may also be used to cancel the send operation by returningfalse
. For instance, a conditional request may check the stat results to determine if the file has been modified to return an appropriate304
response:import http2 from 'node:http2'; const server = http2.createServer(); server.on('stream', (stream) => { function statCheck(stat, headers) { // Check the stat here... stream.respond({ ':status': 304 }); return false; // Cancel the send operation } stream.respondWithFile('/some/file', { 'content-type': 'text/plain; charset=utf-8' }, { statCheck }); });
The
content-length
header field will be automatically set.The
offset
andlength
options may be used to limit the response to a specific range subset. This can be used, for instance, to support HTTP Range requests.The
options.onError
function may also be used to handle all the errors that could happen before the delivery of the file is initiated. The default behavior is to destroy the stream.When the
options.waitForTrailers
option is set, the'wantTrailers'
event will be emitted immediately after queuing the last chunk of payload data to be sent. Thehttp2stream.sendTrailers()
method can then be used to send trailing header fields to the peer.When
options.waitForTrailers
is set, theHttp2Stream
will not automatically close when the finalDATA
frame is transmitted. User code must call eitherhttp2stream.sendTrailers()
orhttp2stream.close()
to close theHttp2Stream
.import http2 from 'node:http2'; const server = http2.createServer(); server.on('stream', (stream) => { stream.respondWithFile('/some/file', { 'content-type': 'text/plain; charset=utf-8' }, { waitForTrailers: true }); stream.on('wantTrailers', () => { stream.sendTrailers({ ABC: 'some value to send' }); }); });
The
readable.resume()
method causes an explicitly pausedReadable
stream to resume emitting'data'
events, switching the stream into flowing mode.The
readable.resume()
method can be used to fully consume the data from a stream without actually processing any of that data:getReadableStreamSomehow() .resume() .on('end', () => { console.log('Reached the end, but did not read anything.'); });
The
readable.resume()
method has no effect if there is a'readable'
event listener.- ): void;
Sends a trailing
HEADERS
frame to the connected HTTP/2 peer. This method will cause theHttp2Stream
to be immediately closed and must only be called after the'wantTrailers'
event has been emitted. When sending a request or sending a response, theoptions.waitForTrailers
option must be set in order to keep theHttp2Stream
open after the finalDATA
frame so that trailers can be sent.import http2 from 'node:http2'; const server = http2.createServer(); server.on('stream', (stream) => { stream.respond(undefined, { waitForTrailers: true }); stream.on('wantTrailers', () => { stream.sendTrailers({ xyz: 'abc' }); }); stream.end('Hello World'); });
The HTTP/1 specification forbids trailers from containing HTTP/2 pseudo-header fields (e.g.
':method'
,':path'
, etc). - encoding: BufferEncoding): this;
The
writable.setDefaultEncoding()
method sets the defaultencoding
for aWritable
stream.@param encodingThe new default encoding
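A small sketch using a throwaway Writable for illustration: once a default encoding is set, string chunks written without an explicit encoding are decoded with it before reaching the underlying write implementation:

import { Writable } from 'node:stream';

const sink = new Writable({
  write(chunk, encoding, callback) {
    // By default, string input is converted to a Buffer using the
    // encoding in effect for that write() call.
    console.log(Buffer.isBuffer(chunk), chunk.length);
    callback();
  },
});

sink.setDefaultEncoding('utf8');
sink.write('écrit'); // decoded as UTF-8, no per-call encoding needed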
- encoding: BufferEncoding): this;
The
readable.setEncoding()
method sets the character encoding for data read from theReadable
stream.By default, no encoding is assigned and stream data will be returned as
Buffer
objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than asBuffer
objects. For instance, callingreadable.setEncoding('utf8')
will cause the output data to be interpreted as UTF-8 data, and passed as strings. Callingreadable.setEncoding('hex')
will cause the data to be encoded in hexadecimal string format.The
Readable
stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream asBuffer
objects.const readable = getReadableStreamSomehow(); readable.setEncoding('utf8'); readable.on('data', (chunk) => { assert.equal(typeof chunk, 'string'); console.log('Got %d characters of string data:', chunk.length); });
@param encodingThe encoding to use.
- n: number): this;
By default
EventEmitter
s will print a warning if more than10
listeners are added for a particular event. This is a useful default that helps finding memory leaks. Theemitter.setMaxListeners()
method allows the limit to be modified for this specificEventEmitter
instance. The value can be set toInfinity
(or0
) to indicate an unlimited number of listeners.Returns a reference to the
EventEmitter
, so that calls can be chained. - msecs: number,callback?: () => void): void;
import http2 from 'node:http2'; const client = http2.connect('http://example.org:8000'); const { NGHTTP2_CANCEL } = http2.constants; const req = client.request({ ':path': '/' }); // Cancel the stream if there's no activity after 5 seconds req.setTimeout(5000, () => req.close(NGHTTP2_CANCEL));
- some(): Promise<boolean>;
This method is similar to
Array.prototype.some
and calls fn on each chunk in the stream until the awaited return value istrue
(or any truthy value). Once an fn call's await
ed return value is truthy, the stream is destroyed and the promise is fulfilled withtrue
. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled withfalse
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
true
if fn returned a truthy value for at least one of the chunks. - @param limit
the number of chunks to take from the readable.
@returnsa stream with limit chunks taken.
- ): Promise<any[]>;
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.
@returnsa promise containing an array with the contents of the stream.
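A combined sketch of some, take, and toArray (assuming the experimental helpers and a small in-memory source; for large streams, toArray buffers everything and so defeats backpressure):

import { Readable } from 'node:stream';

// some() destroys the stream as soon as a chunk satisfies the predicate.
const hasEven = await Readable.from([1, 3, 4, 5]).some((n) => n % 2 === 0);
console.log(hasEven); // true

// take() limits the stream to the first N chunks; toArray() buffers them all.
const firstTwo = await Readable.from([1, 2, 3, 4]).take(2).toArray();
console.log(firstTwo); // [ 1, 2 ]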
The
writable.uncork()
method flushes all data buffered since cork was called.When using
writable.cork()
andwritable.uncork()
to manage the buffering of writes to a stream, defer calls towritable.uncork()
usingprocess.nextTick()
. Doing so allows batching of allwritable.write()
calls that occur within a given Node.js event loop phase.stream.cork(); stream.write('some '); stream.write('data '); process.nextTick(() => stream.uncork());
If the
writable.cork()
method is called multiple times on a stream, the same number of calls towritable.uncork()
must be called to flush the buffered data.stream.cork(); stream.write('some '); stream.cork(); stream.write('data '); process.nextTick(() => { stream.uncork(); // The data will not be flushed until uncork() is called a second time. stream.uncork(); });
See also:
writable.cork()
.- destination?: WritableStream): this;
The
readable.unpipe()
method detaches aWritable
stream previously attached using the pipe method.If the
destination
is not specified, then all pipes are detached.If the
destination
is specified, but no pipe is set up for it, then the method does nothing.import fs from 'node:fs'; const readable = getReadableStreamSomehow(); const writable = fs.createWriteStream('file.txt'); // All the data from readable goes into 'file.txt', // but only for the first second. readable.pipe(writable); setTimeout(() => { console.log('Stop writing to file.txt.'); readable.unpipe(writable); console.log('Manually close the file stream.'); writable.end(); }, 1000);
@param destinationOptional specific stream to unpipe
- chunk: any,encoding?: BufferEncoding): void;
Passing
chunk
asnull
signals the end of the stream (EOF) and behaves the same asreadable.push(null)
, after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.The
readable.unshift()
method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.The
stream.unshift(chunk)
method cannot be called after the'end'
event has been emitted or a runtime error will be thrown.Developers using
stream.unshift()
often should consider switching to use of aTransform
stream instead. See theAPI for stream implementers
section for more information.// Pull off a header delimited by \n\n. // Use unshift() if we get too much. // Call the callback with (error, header, stream). import { StringDecoder } from 'node:string_decoder'; function parseHeader(stream, callback) { stream.on('error', callback); stream.on('readable', onReadable); const decoder = new StringDecoder('utf8'); let header = ''; function onReadable() { let chunk; while (null !== (chunk = stream.read())) { const str = decoder.write(chunk); if (str.includes('\n\n')) { // Found the header boundary. const split = str.split(/\n\n/); header += split.shift(); const remaining = split.join('\n\n'); const buf = Buffer.from(remaining, 'utf8'); stream.removeListener('error', callback); // Remove the 'readable' listener before unshifting. stream.removeListener('readable', onReadable); if (buf.length) stream.unshift(buf); // Now the body of the message can be read from the stream. callback(null, header, stream); return; } // Still reading the header. header += str; } } }
Unlike push,
stream.unshift(chunk)
will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results ifreadable.unshift()
is called during a read (i.e. from within a _read implementation on a custom stream). Following the call toreadable.unshift()
with an immediate push will reset the reading state appropriately, however it is best to simply avoid callingreadable.unshift()
while in the process of performing a read.@param chunkChunk of data to unshift onto the read queue. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray}, {DataView} ornull
. For object mode streams,chunk
may be any JavaScript value.@param encodingEncoding of string chunks. Must be a valid
Buffer
encoding, such as'utf8'
or'ascii'
. - wrap(stream: ReadableStream): this;
Prior to Node.js 0.10, streams did not implement the entire
node:stream
module API as it is currently defined. (SeeCompatibility
for more information.)When using an older Node.js library that emits
'data'
events and has a pause method that is advisory only, thereadable.wrap()
method can be used to create aReadable
stream that uses the old stream as its data source.It will rarely be necessary to use
readable.wrap()
but the method has been provided as a convenience for interacting with older Node.js applications and libraries.import { OldReader } from './old-api-module.js'; import { Readable } from 'node:stream'; const oreader = new OldReader(); const myReader = new Readable().wrap(oreader); myReader.on('readable', () => { myReader.read(); // etc. });
@param streamAn "old style" readable stream
- chunk: any,): boolean;
The
writable.write()
method writes some data to the stream, and calls the suppliedcallback
once the data has been fully handled. If an error occurs, thecallback
will be called with the error as its first argument. Thecallback
is called asynchronously and before'error'
is emitted.The return value is
true
if the internal buffer is less than thehighWaterMark
configured when the stream was created after admittingchunk
. Iffalse
is returned, further attempts to write data to the stream should stop until the'drain'
event is emitted.While a stream is not draining, calls to
write()
will bufferchunk
, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'
event will be emitted. Oncewrite()
returns false, do not write more chunks until the'drain'
event is emitted. While callingwrite()
on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform
, because theTransform
streams are paused by default until they are piped or a'data'
or'readable'
event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readable
and use pipe. However, if callingwrite()
is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'
event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });
A
Writable
stream in object mode will always ignore theencoding
argument.@param chunkOptional data to write. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunk
may be any JavaScript value other thannull
.@param callbackCallback for when this chunk of data is flushed.
@returnsfalse
if the stream wishes for the calling code to wait for the'drain'
event to be emitted before continuing to write additional data; otherwisetrue
.chunk: any,encoding: BufferEncoding,): boolean;The
writable.write()
method writes some data to the stream, and calls the suppliedcallback
once the data has been fully handled. If an error occurs, thecallback
will be called with the error as its first argument. Thecallback
is called asynchronously and before'error'
is emitted.The return value is
true
if the internal buffer is less than thehighWaterMark
configured when the stream was created after admittingchunk
. Iffalse
is returned, further attempts to write data to the stream should stop until the'drain'
event is emitted.While a stream is not draining, calls to
write()
will bufferchunk
, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'
event will be emitted. Oncewrite()
returns false, do not write more chunks until the'drain'
event is emitted. While callingwrite()
on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform
, because theTransform
streams are paused by default until they are piped or a'data'
or'readable'
event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readable
and use pipe. However, if callingwrite()
is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'
event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });
A
Writable
stream in object mode will always ignore theencoding
argument.@param chunkOptional data to write. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunk
may be any JavaScript value other thannull
.@param encodingThe encoding, if
chunk
is a string.@param callbackCallback for when this chunk of data is flushed.
@returnsfalse
if the stream wishes for the calling code to wait for the'drain'
event to be emitted before continuing to write additional data; otherwisetrue
.
interface ServerOptions<Http1Request extends typeof IncomingMessage = typeof IncomingMessage, Http1Response extends typeof ServerResponse = typeof ServerResponse, Http2Request extends typeof Http2ServerRequest = typeof Http2ServerRequest, Http2Response extends typeof Http2ServerResponse = typeof Http2ServerResponse>
- unknownProtocolTimeout?: number
Specifies a timeout in milliseconds that a server should wait when an
'unknownProtocol'
event is emitted. If the socket has not been destroyed by that time the server will destroy it.
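For illustration only, a hedged sketch of passing this option when creating a secure server (the key/cert paths are placeholders; 'unknownProtocol' fires when a TLS client fails to negotiate an allowed protocol):

import http2 from 'node:http2';
import fs from 'node:fs';

const server = http2.createSecureServer({
  key: fs.readFileSync('server-key.pem'),   // placeholder path
  cert: fs.readFileSync('server-cert.pem'), // placeholder path
  // Destroy sockets that have not negotiated a usable protocol
  // within one second of the 'unknownProtocol' event.
  unknownProtocolTimeout: 1000,
});

server.listen(8443);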
interface ServerSessionOptions<Http1Request extends typeof IncomingMessage = typeof IncomingMessage, Http1Response extends typeof ServerResponse = typeof ServerResponse, Http2Request extends typeof Http2ServerRequest = typeof Http2ServerRequest, Http2Response extends typeof Http2ServerResponse = typeof Http2ServerResponse>
- unknownProtocolTimeout?: number
Specifies a timeout in milliseconds that a server should wait when an
'unknownProtocol'
event is emitted. If the socket has not been destroyed by that time the server will destroy it.
interface ServerStreamFileResponseOptions
interface ServerStreamFileResponseOptionsWithError
interface ServerStreamResponseOptions
interface SessionOptions
- unknownProtocolTimeout?: number
Specifies a timeout in milliseconds that a server should wait when an
'unknownProtocol'
event is emitted. If the socket has not been destroyed by that time the server will destroy it.
interface SessionState
interface Settings
interface StatOptions
interface StreamPriorityOptions
interface StreamState