An Http2ServerRequest object is created by Server or SecureServer and passed as the first argument to the 'request' event. It may be used to access the request's status, headers, and data.
Node.js module
http2
The 'node:http2' module provides an API for HTTP/2 clients and servers, including support for multiplexing streams, HPACK header compression, and server push.
Works in Bun
Client & server are implemented (95.25% of gRPC's test suite passes). Some options, the ALTSVC extension, and server push functionality are missing.
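For orientation, here is a minimal sketch (not taken from the generated reference below) of a plain-text HTTP/2 server; the handler receives the Http2ServerRequest and Http2ServerResponse objects documented on this page. Browsers only speak HTTP/2 over TLS, so an unencrypted server like this is mainly useful for local testing or behind a terminating proxy.

```js
import http2 from 'node:http2';

// Create an unencrypted HTTP/2 server.
const server = http2.createServer();

server.on('request', (request, response) => {
  // request is an Http2ServerRequest, response is an Http2ServerResponse.
  response.setHeader('content-type', 'text/plain; charset=utf-8');
  response.end(`You requested ${request.url}\n`);
});

server.listen(3000);
```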
namespace constants
class Http2ServerRequest
- readonly complete: boolean
The request.complete property will be true if the request has been completed, aborted, or destroyed.
- readonly headers: IncomingHttpHeaders
The request/response headers object.
Key-value pairs of header names and values. Header names are lower-cased.
// Prints something like:
//
// { 'user-agent': 'curl/7.22.0',
//   host: '127.0.0.1:8000',
//   accept: '*' }
console.log(request.headers);

See HTTP/2 Headers Object. In HTTP/2, the request path, host name, protocol, and method are represented as special headers prefixed with the : character (e.g. ':path'). These special headers will be included in the request.headers object. Care must be taken not to inadvertently modify these special headers or errors may occur. For instance, removing all headers from the request will cause errors to occur:

removeAllHeaders(request.headers);
assert(request.url); // Fails because the :path header has been removed

- readonly httpVersion: string
In the case of a server request, the HTTP version sent by the client. In the case of a client response, the HTTP version of the connected-to server. Returns '2.0'.

Also, message.httpVersionMajor is the first integer and message.httpVersionMinor is the second.
- readonly rawHeaders: string[]
The raw request/response headers list exactly as they were received.
The keys and values are in the same list. It is not a list of tuples. So, the even-numbered offsets are key values, and the odd-numbered offsets are the associated values.
Header names are not lowercased, and duplicates are not merged.
// Prints something like:
//
// [ 'user-agent',
//   'this is invalid because there can be only one',
//   'User-Agent',
//   'curl/7.22.0',
//   'Host',
//   '127.0.0.1:8000',
//   'ACCEPT',
//   '*' ]
console.log(request.rawHeaders);

- readonly rawTrailers: string[]
The raw request/response trailer keys and values exactly as they were received. Only populated at the 'end' event.
- readable: boolean
Is true if it is safe to call read, which means the stream has not been destroyed or emitted 'error' or 'end'.
- readonly readableAborted: boolean
Returns whether the stream was destroyed or errored before emitting 'end'.
- readonly readableEncoding: null | BufferEncoding
Getter for the property encoding of a given Readable stream. The encoding property can be set using the setEncoding method.
- readonly readableFlowing: null | boolean
This property reflects the current state of a Readable stream as described in the Three states section.
- readonly readableHighWaterMark: number
Returns the value of highWaterMark passed when creating this Readable.
- readonly readableLength: number
This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the highWaterMark.
- readonly scheme: string
The request scheme pseudo header field indicating the scheme portion of the target URL.
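As a rough sketch (assuming the server variable from the example near the top of this page), the pseudo-header-backed properties can be read directly off the request; for a request to http://localhost:3000/status?name=ryan this would typically log 'http', 'GET', and '/status?name=ryan'.

```js
server.on('request', (request, response) => {
  console.log(request.scheme);            // backed by the ':scheme' pseudo-header
  console.log(request.method);            // backed by the ':method' pseudo-header
  console.log(request.headers[':path']);  // same value as request.url
  response.end();
});
```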
- readonly socket: Socket | TLSSocket
Returns a Proxy object that acts as a net.Socket (or tls.TLSSocket) but applies getters, setters, and methods based on HTTP/2 logic.

destroyed, readable, and writable properties will be retrieved from and set on request.stream.

destroy, emit, end, on and once methods will be called on request.stream.

setTimeout method will be called on request.stream.session.

pause, read, resume, and write will throw an error with code ERR_HTTP2_NO_SOCKET_MANIPULATION. See Http2Session and Sockets for more information.

All other interactions will be routed directly to the socket. With TLS support, use request.socket.getPeerCertificate() to obtain the client's authentication details.
- readonly trailers: IncomingHttpHeaders
The request/response trailers object. Only populated at the 'end' event.
- url: string
Request URL string. This contains only the URL that is present in the actual HTTP request. If the request is:
GET /status?name=ryan HTTP/1.1
Accept: text/plain

Then request.url will be: '/status?name=ryan'

To parse the URL into its parts, new URL() can be used:

$ node
> new URL('/status?name=ryan', 'http://example.com')
URL {
  href: 'http://example.com/status?name=ryan',
  origin: 'http://example.com',
  protocol: 'http:',
  username: '',
  password: '',
  host: 'example.com',
  hostname: 'example.com',
  port: '',
  pathname: '/status',
  search: '?name=ryan',
  searchParams: URLSearchParams { 'name' => 'ryan' },
  hash: ''
}

- static captureRejections: boolean
Value: boolean
Change the default captureRejections option on all new EventEmitter objects.
- readonly static captureRejectionSymbol: typeof captureRejectionSymbol
Value: Symbol.for('nodejs.rejection')

See how to write a custom rejection handler.
- static defaultMaxListeners: number
By default, a maximum of 10 listeners can be registered for any single event. This limit can be changed for individual EventEmitter instances using the emitter.setMaxListeners(n) method. To change the default for all EventEmitter instances, the events.defaultMaxListeners property can be used. If this value is not a positive number, a RangeError is thrown.

Take caution when setting the events.defaultMaxListeners because the change affects all EventEmitter instances, including those created before the change is made. However, calling emitter.setMaxListeners(n) still has precedence over events.defaultMaxListeners.

This is not a hard limit. The EventEmitter instance will allow more listeners to be added but will output a trace warning to stderr indicating that a "possible EventEmitter memory leak" has been detected. For any single EventEmitter, the emitter.getMaxListeners() and emitter.setMaxListeners() methods can be used to temporarily avoid this warning:

import { EventEmitter } from 'node:events';
const emitter = new EventEmitter();
emitter.setMaxListeners(emitter.getMaxListeners() + 1);
emitter.once('event', () => {
  // do stuff
  emitter.setMaxListeners(Math.max(emitter.getMaxListeners() - 1, 0));
});

The --trace-warnings command-line flag can be used to display the stack trace for such warnings.

The emitted warning can be inspected with process.on('warning') and will have the additional emitter, type, and count properties, referring to the event emitter instance, the event's name and the number of attached listeners, respectively. Its name property is set to 'MaxListenersExceededWarning'.
- readonly static errorMonitor: typeof errorMonitor
This symbol shall be used to install a listener for only monitoring 'error' events. Listeners installed using this symbol are called before the regular 'error' listeners are called.

Installing a listener using this symbol does not change the behavior once an 'error' event is emitted. Therefore, the process will still crash if no regular 'error' listener is installed.

Calls readable.destroy() with an AbortError and returns a promise that fulfills when the stream is finished.
- @returns AsyncIterator to fully consume the stream.
- event: 'aborted', listener: (hadError: boolean, code: number) => void): this;
Event emitter. The defined events on documents include:
- close
- data
- end
- error
- pause
- readable
- resume

The same set of events applies to each of the following overloads:

event: 'close', listener: () => void): this;
event: 'data', ): this;
event: 'end', listener: () => void): this;
event: 'readable', listener: () => void): this;
event: 'error', ): this;
event: string | symbol, listener: (...args: any[]) => void): this;
This method returns a new stream with chunks of the underlying stream paired with a counter in the form [index, chunk]. The first index value is 0 and it increases by 1 for each chunk produced.
@returns a stream of indexed pairs.
- stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
- ): this;
Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the readable stream will release any internal resources and subsequent calls to push() will be ignored.

Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.

Implementors should not override this method, but instead implement readable._destroy().
@param error Error which will be passed as payload in 'error' event
- drop(limit: number,
This method returns a new stream with the first limit chunks dropped from the start.
@param limit the number of chunks to drop from the readable.
@returns a stream with limit chunks dropped from the start.
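A small sketch of drop(), using Readable.from() to build a stream from an in-memory iterable (drop() is marked experimental in Node.js):

```js
import { Readable } from 'node:stream';

// Skip the first two chunks, then consume the rest.
for await (const chunk of Readable.from([1, 2, 3, 4]).drop(2)) {
  console.log(chunk); // 3, then 4
}
```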
- emit(event: 'aborted',hadError: boolean,code: number): boolean;
Synchronously calls each of the listeners registered for the event named
eventName, in the order they were registered, passing the supplied arguments to each.Returns
trueif the event had listeners,falseotherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbols.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]- ): Promise<boolean>;
This method is similar to Array.prototype.every and calls fn on each chunk in the stream to check if all awaited return values are truthy for fn. Once an fn call on a chunk's awaited return value is falsy, the stream is destroyed and the promise is fulfilled with false. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true.
@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to true if fn returned a truthy value for every one of the chunks.

This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be awaited.
@param fn a function to filter chunks from the stream. Async or not.
@returns a stream filtered with the predicate fn.
- ): Promise<undefined | T>;
This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.
@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.

find(): Promise<any>;

The same description applies to this overload.

This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.

It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.
@param fn a function to map over every chunk in the stream. May be async. May be a stream or generator.
@returns a stream flat-mapped with the function fn.
- ): Promise<void>;
This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be awaited.

This method is different from for await...of loops in that it can optionally process chunks concurrently. In addition, a forEach iteration can only be stopped by having passed a signal option and aborting the related AbortController while for await...of can be stopped with break or return. In either case the stream will be destroyed.

This method is different from listening to the 'data' event in that it uses the readable event in the underlying machinery and can limit the number of concurrent fn calls.
@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise for when the stream has finished.
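A short sketch of forEach(); the concurrency option caps how many fn calls run at once:

```js
import { Readable } from 'node:stream';

await Readable.from(['a', 'b', 'c']).forEach(async (chunk) => {
  console.log(chunk);
}, { concurrency: 2 }); // at most two concurrent fn calls
```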
Returns the current max listener value for the
EventEmitterwhich is either set byemitter.setMaxListeners(n)or defaults to EventEmitter.defaultMaxListeners.The
readable.isPaused()method returns the current operating state of theReadable. This is used primarily by the mechanism that underlies thereadable.pipe()method. In most typical cases, there will be no reason to use this method directly.const readable = new stream.Readable(); readable.isPaused(); // === false readable.pause(); readable.isPaused(); // === true readable.resume(); readable.isPaused(); // === false- options?: { destroyOnReturn: boolean }): AsyncIterator<any>;
The iterator created by this method gives users the option to cancel the destruction of the stream if the
for await...ofloop is exited byreturn,break, orthrow, or if the iterator should destroy the stream if the stream emitted an error during iteration. - eventName: string | symbol,listener?: Function): number;
Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.
@param eventName The name of the event being listened for
@param listener The event handler function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named eventName.

server.on('connection', (stream) => {
  console.log('someone connected!');
});
console.log(util.inspect(server.listeners('connection')));
// Prints: [ [Function] ]

- map(
This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be awaited before being passed to the result stream.
@param fn a function to map over every chunk in the stream. Async or not.
@returns a stream mapped with the function fn.
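A minimal sketch of map(); an async fn is awaited before its result is passed along:

```js
import { Readable } from 'node:stream';

const upper = Readable.from(['a', 'b']).map(async (chunk) => chunk.toUpperCase());

for await (const chunk of upper) {
  console.log(chunk); // 'A', then 'B'
}
```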
- eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for emitter.removeListener().
- on(event: 'aborted', listener: (hadError: boolean, code: number) => void): this;
Adds the
listenerfunction to the end of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
- once(event: 'aborted',listener: (hadError: boolean, code: number) => void): this;
Adds a one-time
listenerfunction for the event namedeventName. The next timeeventNameis triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
The
readable.pause()method will cause a stream in flowing mode to stop emitting'data'events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.const readable = getReadableStreamSomehow(); readable.on('data', (chunk) => { console.log(`Received ${chunk.length} bytes of data.`); readable.pause(); console.log('There will be no additional data for 1 second.'); setTimeout(() => { console.log('Now data will start flowing again.'); readable.resume(); }, 1000); });The
readable.pause()method has no effect if there is a'readable'event listener.- event: 'aborted',listener: (hadError: boolean, code: number) => void): this;
Adds the
listenerfunction to the beginning of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
- event: 'aborted',listener: (hadError: boolean, code: number) => void): this;
Adds a one-time
listenerfunction for the event namedeventNameto the beginning of the listeners array. The next timeeventNameis triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
event: 'data',): this; - eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName, including any wrappers (such as those created by.once()).import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log'); - read(size?: number
The
readable.read()method reads data out of the internal buffer and returns it. If no data is available to be read,nullis returned. By default, the data is returned as aBufferobject unless an encoding has been specified using thereadable.setEncoding()method or the stream is operating in object mode.The optional
sizeargument specifies a specific number of bytes to read. Ifsizebytes are not available to be read,nullwill be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.If the
sizeargument is not specified, all of the data contained in the internal buffer will be returned.The
sizeargument must be less than or equal to 1 GiB.The
readable.read()method should only be called onReadablestreams operating in paused mode. In flowing mode,readable.read()is called automatically until the internal buffer is fully drained.const readable = getReadableStreamSomehow(); // 'readable' may be triggered multiple times as data is buffered in readable.on('readable', () => { let chunk; console.log('Stream is readable (new data received in buffer)'); // Use a loop to make sure we read all currently available data while (null !== (chunk = readable.read())) { console.log(`Read ${chunk.length} bytes of data...`); } }); // 'end' will be triggered once when there is no more data available readable.on('end', () => { console.log('Reached end of stream.'); });Each call to
readable.read()returns a chunk of data, ornull. The chunks are not concatenated. Awhileloop is necessary to consume all data currently in the buffer. When reading a large file.read()may returnnull, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new'readable'event will be emitted when there is more data in the buffer. Finally the'end'event will be emitted when there is no more data to come.Therefore to read a file's whole contents from a
readable, it is necessary to collect chunks across multiple'readable'events:const chunks = []; readable.on('readable', () => { let chunk; while (null !== (chunk = readable.read())) { chunks.push(chunk); } }); readable.on('end', () => { const content = chunks.join(''); });A
Readablestream in object mode will always return a single item from a call toreadable.read(size), regardless of the value of thesizeargument.If the
readable.read()method returns a chunk of data, a'data'event will also be emitted.Calling read after the
'end'event has been emitted will returnnull. No runtime error will be raised.@param sizeOptional argument to specify how much data to read.
- initial?: undefined,): Promise<T>;
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.

If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.

The reducer function iterates the stream element-by-element which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the readable.map method.
@param fn a reducer function to call over every chunk in the stream. Async or not.
@param initial the initial value to use in the reduction.
@returns a promise for the final value of the reduction.

initial: T,): Promise<T>;

The same description applies to this overload.
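A minimal sketch of reduce() with an explicit initial value (so an empty stream would resolve to 0 rather than reject):

```js
import { Readable } from 'node:stream';

const total = await Readable.from([1, 2, 3]).reduce((sum, chunk) => sum + chunk, 0);
console.log(total); // 6
```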
- eventName?: string | symbol): this;
Removes all listeners, or those of the specified eventName.

It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).

Returns a reference to the EventEmitter, so that calls can be chained.
- event: 'close', listener: () => void): this;
Removes the specified
listenerfrom the listener array for the event namedeventName.const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback);removeListener()will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specifiedeventName, thenremoveListener()must be called multiple times to remove each instance.Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any
removeListener()orremoveAllListeners()calls after emitting and before the last listener finishes execution will not remove them fromemit()in progress. Subsequent events behave as expected.import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // ABecause listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the
emitter.listeners()method will need to be recreated.When a single function has been added as a handler multiple times for a single event (as in the example below),
removeListener()will remove the most recently added instance. In the example theonce('ping')listener is removed:import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping');Returns a reference to the
EventEmitter, so that calls can be chained. The
readable.resume()method causes an explicitly pausedReadablestream to resume emitting'data'events, switching the stream into flowing mode.The
readable.resume()method can be used to fully consume the data from a stream without actually processing any of that data:getReadableStreamSomehow() .resume() .on('end', () => { console.log('Reached the end, but did not read anything.'); });The
readable.resume()method has no effect if there is a'readable'event listener.- encoding: BufferEncoding): this;
The readable.setEncoding() method sets the character encoding for data read from the Readable stream.

By default, no encoding is assigned and stream data will be returned as Buffer objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than as Buffer objects. For instance, calling readable.setEncoding('utf8') will cause the output data to be interpreted as UTF-8 data, and passed as strings. Calling readable.setEncoding('hex') will cause the data to be encoded in hexadecimal string format.

The Readable stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream as Buffer objects.

const readable = getReadableStreamSomehow();
readable.setEncoding('utf8');
readable.on('data', (chunk) => {
  assert.equal(typeof chunk, 'string');
  console.log('Got %d characters of string data:', chunk.length);
});

@param encoding The encoding to use.
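Applied to the class documented here, a sketch of collecting an Http2ServerRequest body as UTF-8 text (the server variable is assumed from the example near the top of this page):

```js
server.on('request', (request, response) => {
  request.setEncoding('utf8');   // 'data' chunks arrive as strings, not Buffers
  let body = '';
  request.on('data', (chunk) => { body += chunk; });
  request.on('end', () => {
    response.end(`Received ${body.length} characters\n`);
  });
});
```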
- n: number): this;
By default EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps finding memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.

Returns a reference to the EventEmitter, so that calls can be chained.
- msecs: number, callback?: () => void): void;
Sets the Http2Stream's timeout value to msecs. If a callback is provided, then it is added as a listener on the 'timeout' event on the response object.

If no 'timeout' listener is added to the request, the response, or the server, then Http2Streams are destroyed when they time out. If a handler is assigned to the request, the response, or the server's 'timeout' events, timed out sockets must be handled explicitly.
- some(): Promise<boolean>;
This method is similar to Array.prototype.some and calls fn on each chunk in the stream until the awaited return value is true (or any truthy value). Once an fn call on a chunk's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false.
@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to true if fn returned a truthy value for at least one of the chunks.
- @param limit the number of chunks to take from the readable.
@returns a stream with limit chunks taken.
- ): Promise<any[]>;
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.
@returns a promise containing an array with the contents of the stream.
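A minimal sketch of toArray(), suitable only for streams known to be small:

```js
import { Readable } from 'node:stream';

const chunks = await Readable.from(['a', 'b', 'c']).toArray();
console.log(chunks); // [ 'a', 'b', 'c' ]
```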
- destination?: WritableStream): this;
The
readable.unpipe()method detaches aWritablestream previously attached using the pipe method.If the
destinationis not specified, then all pipes are detached.If the
destinationis specified, but no pipe is set up for it, then the method does nothing.import fs from 'node:fs'; const readable = getReadableStreamSomehow(); const writable = fs.createWriteStream('file.txt'); // All the data from readable goes into 'file.txt', // but only for the first second. readable.pipe(writable); setTimeout(() => { console.log('Stop writing to file.txt.'); readable.unpipe(writable); console.log('Manually close the file stream.'); writable.end(); }, 1000);@param destinationOptional specific stream to unpipe
- chunk: any,encoding?: BufferEncoding): void;
Passing
chunkasnullsignals the end of the stream (EOF) and behaves the same asreadable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.The
readable.unshift()method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.The
stream.unshift(chunk)method cannot be called after the'end'event has been emitted or a runtime error will be thrown.Developers using
stream.unshift()often should consider switching to use of aTransformstream instead. See theAPI for stream implementerssection for more information.// Pull off a header delimited by \n\n. // Use unshift() if we get too much. // Call the callback with (error, header, stream). import { StringDecoder } from 'node:string_decoder'; function parseHeader(stream, callback) { stream.on('error', callback); stream.on('readable', onReadable); const decoder = new StringDecoder('utf8'); let header = ''; function onReadable() { let chunk; while (null !== (chunk = stream.read())) { const str = decoder.write(chunk); if (str.includes('\n\n')) { // Found the header boundary. const split = str.split(/\n\n/); header += split.shift(); const remaining = split.join('\n\n'); const buf = Buffer.from(remaining, 'utf8'); stream.removeListener('error', callback); // Remove the 'readable' listener before unshifting. stream.removeListener('readable', onReadable); if (buf.length) stream.unshift(buf); // Now the body of the message can be read from the stream. callback(null, header, stream); return; } // Still reading the header. header += str; } } }Unlike push,
stream.unshift(chunk)will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results ifreadable.unshift()is called during a read (i.e. from within a _read implementation on a custom stream). Following the call toreadable.unshift()with an immediate push will reset the reading state appropriately, however it is best to simply avoid callingreadable.unshift()while in the process of performing a read.@param chunkChunk of data to unshift onto the read queue. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray}, {DataView} ornull. For object mode streams,chunkmay be any JavaScript value.@param encodingEncoding of string chunks. Must be a valid
Bufferencoding, such as'utf8'or'ascii'. - wrap(stream: ReadableStream): this;
Prior to Node.js 0.10, streams did not implement the entire
node:streammodule API as it is currently defined. (SeeCompatibilityfor more information.)When using an older Node.js library that emits
'data'events and has a pause method that is advisory only, thereadable.wrap()method can be used to create aReadablestream that uses the old stream as its data source.It will rarely be necessary to use
readable.wrap()but the method has been provided as a convenience for interacting with older Node.js applications and libraries.import { OldReader } from './old-api-module.js'; import { Readable } from 'node:stream'; const oreader = new OldReader(); const myReader = new Readable().wrap(oreader); myReader.on('readable', () => { myReader.read(); // etc. });@param streamAn "old style" readable stream
- ): Disposable;
Listens once to the
abortevent on the providedsignal.Listening to the
abortevent on abort signals is unsafe and may lead to resource leaks since another third party with the signal can calle.stopImmediatePropagation(). Unfortunately Node.js cannot change this since it would violate the web standard. Additionally, the original API makes it easy to forget to remove listeners.This API allows safely using
AbortSignals in Node.js APIs by solving these two issues by listening to the event such thatstopImmediatePropagationdoes not prevent the listener from running.Returns a disposable so that it may be unsubscribed from more easily.
import { addAbortListener } from 'node:events'; function example(signal) { let disposable; try { signal.addEventListener('abort', (e) => e.stopImmediatePropagation()); disposable = addAbortListener(signal, (e) => { // Do something when signal is aborted. }); } finally { disposable?.[Symbol.dispose](); } }@returnsDisposable that removes the
abortlistener. - iterable: Iterable<any, any, any> | AsyncIterable<any, any, any>,
A utility method for creating Readable Streams out of iterators.
@param iterableObject implementing the
Symbol.asyncIteratororSymbol.iteratoriterable protocol. Emits an 'error' event if a null value is passed.@param optionsOptions provided to
new stream.Readable([options]). By default,Readable.from()will setoptions.objectModetotrue, unless this is explicitly opted out by settingoptions.objectModetofalse. A utility method for creating a
Readablefrom a webReadableStream.- name: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName.For
EventEmitters this behaves exactly the same as calling.listenerson the emitter.For
EventTargets this is the only way to get the event listeners for the event target. This is useful for debugging and diagnostic purposes.import { getEventListeners, EventEmitter } from 'node:events'; { const ee = new EventEmitter(); const listener = () => console.log('Events are fun'); ee.on('foo', listener); console.log(getEventListeners(ee, 'foo')); // [ [Function: listener] ] } { const et = new EventTarget(); const listener = () => console.log('Events are fun'); et.addEventListener('foo', listener); console.log(getEventListeners(et, 'foo')); // [ [Function: listener] ] } - ): number;
Returns the currently set max amount of listeners.
For
EventEmitters this behaves exactly the same as calling.getMaxListenerson the emitter.For
EventTargets this is the only way to get the max event listeners for the event target. If the number of event handlers on a single EventTarget exceeds the max set, the EventTarget will print a warning.import { getMaxListeners, setMaxListeners, EventEmitter } from 'node:events'; { const ee = new EventEmitter(); console.log(getMaxListeners(ee)); // 10 setMaxListeners(11, ee); console.log(getMaxListeners(ee)); // 11 } { const et = new EventTarget(); console.log(getMaxListeners(et)); // 10 setMaxListeners(11, et); console.log(getMaxListeners(et)); // 11 } - ): boolean;
Returns whether the stream has been read from or cancelled.
- emitter: EventEmitter,eventName: string | symbol,options?: StaticEventEmitterIteratorOptions): AsyncIterator<any[]>;
import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); }); for await (const event of on(ee, 'foo')) { // The execution of this inner block is synchronous and it // processes one event at a time (even with await). Do not use // if concurrent execution is required. console.log(event); // prints ['bar'] [42] } // Unreachable hereReturns an
AsyncIteratorthat iterateseventNameevents. It will throw if theEventEmitteremits'error'. It removes all listeners when exiting the loop. Thevaluereturned by each iteration is an array composed of the emitted event arguments.An
AbortSignalcan be used to cancel waiting on events:import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ac = new AbortController(); (async () => { const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); }); for await (const event of on(ee, 'foo', { signal: ac.signal })) { // The execution of this inner block is synchronous and it // processes one event at a time (even with await). Do not use // if concurrent execution is required. console.log(event); // prints ['bar'] [42] } // Unreachable here })(); process.nextTick(() => ac.abort());Use the
closeoption to specify an array of event names that will end the iteration:import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); ee.emit('close'); }); for await (const event of on(ee, 'foo', { close: ['close'] })) { console.log(event); // prints ['bar'] [42] } // the loop will exit after 'close' is emitted console.log('done'); // prints 'done'@returnsAn
AsyncIteratorthat iterateseventNameevents emitted by theemittereventName: string,options?: StaticEventEmitterIteratorOptions): AsyncIterator<any[]>;import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); }); for await (const event of on(ee, 'foo')) { // The execution of this inner block is synchronous and it // processes one event at a time (even with await). Do not use // if concurrent execution is required. console.log(event); // prints ['bar'] [42] } // Unreachable hereReturns an
AsyncIteratorthat iterateseventNameevents. It will throw if theEventEmitteremits'error'. It removes all listeners when exiting the loop. Thevaluereturned by each iteration is an array composed of the emitted event arguments.An
AbortSignalcan be used to cancel waiting on events:import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ac = new AbortController(); (async () => { const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); }); for await (const event of on(ee, 'foo', { signal: ac.signal })) { // The execution of this inner block is synchronous and it // processes one event at a time (even with await). Do not use // if concurrent execution is required. console.log(event); // prints ['bar'] [42] } // Unreachable here })(); process.nextTick(() => ac.abort());Use the
closeoption to specify an array of event names that will end the iteration:import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); ee.emit('close'); }); for await (const event of on(ee, 'foo', { close: ['close'] })) { console.log(event); // prints ['bar'] [42] } // the loop will exit after 'close' is emitted console.log('done'); // prints 'done'@returnsAn
AsyncIteratorthat iterateseventNameevents emitted by theemitter - emitter: EventEmitter,eventName: string | symbol,options?: StaticEventEmitterOptions): Promise<any[]>;
Creates a
Promisethat is fulfilled when theEventEmitteremits the given event or that is rejected if theEventEmitteremits'error'while waiting. ThePromisewill resolve with an array of all the arguments emitted to the given event.This method is intentionally generic and works with the web platform EventTarget interface, which has no special
'error'event semantics and does not listen to the'error'event.import { once, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); process.nextTick(() => { ee.emit('myevent', 42); }); const [value] = await once(ee, 'myevent'); console.log(value); const err = new Error('kaboom'); process.nextTick(() => { ee.emit('error', err); }); try { await once(ee, 'myevent'); } catch (err) { console.error('error happened', err); }The special handling of the
'error'event is only used whenevents.once()is used to wait for another event. Ifevents.once()is used to wait for the 'error'event itself, then it is treated as any other kind of event without special handling:import { EventEmitter, once } from 'node:events'; const ee = new EventEmitter(); once(ee, 'error') .then(([err]) => console.log('ok', err.message)) .catch((err) => console.error('error', err.message)); ee.emit('error', new Error('boom')); // Prints: ok boomAn
AbortSignalcan be used to cancel waiting for the event:import { EventEmitter, once } from 'node:events'; const ee = new EventEmitter(); const ac = new AbortController(); async function foo(emitter, event, signal) { try { await once(emitter, event, { signal }); console.log('event emitted!'); } catch (error) { if (error.name === 'AbortError') { console.error('Waiting for the event was canceled!'); } else { console.error('There was an error', error.message); } } } foo(ee, 'foo', ac.signal); ac.abort(); // Abort waiting for the event ee.emit('foo'); // Prints: Waiting for the event was canceled!eventName: string,options?: StaticEventEmitterOptions): Promise<any[]>;Creates a
Promisethat is fulfilled when theEventEmitteremits the given event or that is rejected if theEventEmitteremits'error'while waiting. ThePromisewill resolve with an array of all the arguments emitted to the given event.This method is intentionally generic and works with the web platform EventTarget interface, which has no special
'error'event semantics and does not listen to the'error'event.import { once, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); process.nextTick(() => { ee.emit('myevent', 42); }); const [value] = await once(ee, 'myevent'); console.log(value); const err = new Error('kaboom'); process.nextTick(() => { ee.emit('error', err); }); try { await once(ee, 'myevent'); } catch (err) { console.error('error happened', err); }The special handling of the
'error'event is only used whenevents.once()is used to wait for another event. Ifevents.once()is used to wait for the 'error'event itself, then it is treated as any other kind of event without special handling:import { EventEmitter, once } from 'node:events'; const ee = new EventEmitter(); once(ee, 'error') .then(([err]) => console.log('ok', err.message)) .catch((err) => console.error('error', err.message)); ee.emit('error', new Error('boom')); // Prints: ok boomAn
AbortSignalcan be used to cancel waiting for the event:import { EventEmitter, once } from 'node:events'; const ee = new EventEmitter(); const ac = new AbortController(); async function foo(emitter, event, signal) { try { await once(emitter, event, { signal }); console.log('event emitted!'); } catch (error) { if (error.name === 'AbortError') { console.error('Waiting for the event was canceled!'); } else { console.error('There was an error', error.message); } } } foo(ee, 'foo', ac.signal); ac.abort(); // Abort waiting for the event ee.emit('foo'); // Prints: Waiting for the event was canceled! - n?: number,): void;
import { setMaxListeners, EventEmitter } from 'node:events';
const target = new EventTarget();
const emitter = new EventEmitter();
setMaxListeners(5, target, emitter);

@param n A non-negative number. The maximum number of listeners per EventTarget event.
@param eventTargets Zero or more {EventTarget} or {EventEmitter} instances. If none are specified, n is set as the default max for all newly created {EventTarget} and {EventEmitter} objects.

A utility method for creating a web ReadableStream from a Readable.
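A minimal sketch of the web-stream conversion helpers mentioned above (Readable.toWeb() and its counterpart Readable.fromWeb(); both are marked experimental in Node.js):

```js
import { Readable } from 'node:stream';

const nodeReadable = Readable.from(['hello']);
const webReadable = Readable.toWeb(nodeReadable);    // web ReadableStream
const roundTripped = Readable.fromWeb(webReadable);  // back to a Node.js Readable
```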
class Http2ServerResponse<Request extends Http2ServerRequest = Http2ServerRequest>
This object is created internally by an HTTP server, not by the user. It is passed as the second parameter to the 'request' event.
- sendDate: boolean
When true, the Date header will be automatically generated and sent in the response if it is not already present in the headers. Defaults to true.
This should only be disabled for testing; HTTP requires the Date header in responses.
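A small sketch of turning the automatic Date header off, e.g. to keep test fixtures deterministic (the server variable is assumed from the example near the top of this page):

```js
server.on('request', (request, response) => {
  response.sendDate = false; // no Date header will be added automatically
  response.end('no date header');
});
```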
- readonly socket: Socket | TLSSocket
Returns a Proxy object that acts as a net.Socket (or tls.TLSSocket) but applies getters, setters, and methods based on HTTP/2 logic.

destroyed, readable, and writable properties will be retrieved from and set on response.stream.

destroy, emit, end, on and once methods will be called on response.stream.

setTimeout method will be called on response.stream.session.

pause, read, resume, and write will throw an error with code ERR_HTTP2_NO_SOCKET_MANIPULATION. See Http2Session and Sockets for more information.

All other interactions will be routed directly to the socket.

import http2 from 'node:http2';
const server = http2.createServer((req, res) => {
  const ip = req.socket.remoteAddress;
  const port = req.socket.remotePort;
  res.end(`Your IP address is ${ip} and your source port is ${port}.`);
}).listen(3000);

- statusCode: number
When using implicit headers (not calling response.writeHead() explicitly), this property controls the status code that will be sent to the client when the headers get flushed.

response.statusCode = 404;

After the response header has been sent to the client, this property indicates the status code which was sent out.
- statusMessage: ''
Status message is not supported by HTTP/2 (RFC 7540 8.1.2.4). It returns an empty string.
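A short sketch tying statusCode and statusMessage together with implicit headers (again assuming the server variable from the earlier example):

```js
server.on('request', (request, response) => {
  response.statusCode = 404;
  response.setHeader('content-type', 'text/plain; charset=utf-8');
  response.end('Not found\n');
  // response.statusMessage remains '' because HTTP/2 has no reason phrase.
});
```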
- readonly writable: boolean
Is true if it is safe to call writable.write(), which means the stream has not been destroyed, errored, or ended.
- readonly writableAborted: boolean
Returns whether the stream was destroyed or errored before emitting 'finish'.
- readonly writableCorked: number
Number of times writable.uncork() needs to be called in order to fully uncork the stream.
- readonly writableEnded: boolean
Is true after writable.end() has been called. This property does not indicate whether the data has been flushed; for this use writable.writableFinished instead.
- readonly writableHighWaterMark: number
Returns the value of highWaterMark passed when creating this Writable.
- readonly writableLength: number
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the highWaterMark.
- readonly writableNeedDrain: boolean
Is true if the stream's buffer has been full and the stream will emit 'drain'.
- static captureRejections: boolean
Value: boolean
Change the default captureRejections option on all new EventEmitter objects.
- readonly static captureRejectionSymbol: typeof captureRejectionSymbol
Value: Symbol.for('nodejs.rejection')

See how to write a custom rejection handler.
- static defaultMaxListeners: number
By default, a maximum of 10 listeners can be registered for any single event. This limit can be changed for individual EventEmitter instances using the emitter.setMaxListeners(n) method. To change the default for all EventEmitter instances, the events.defaultMaxListeners property can be used. If this value is not a positive number, a RangeError is thrown.

Take caution when setting the events.defaultMaxListeners because the change affects all EventEmitter instances, including those created before the change is made. However, calling emitter.setMaxListeners(n) still has precedence over events.defaultMaxListeners.

This is not a hard limit. The EventEmitter instance will allow more listeners to be added but will output a trace warning to stderr indicating that a "possible EventEmitter memory leak" has been detected. For any single EventEmitter, the emitter.getMaxListeners() and emitter.setMaxListeners() methods can be used to temporarily avoid this warning:

import { EventEmitter } from 'node:events';
const emitter = new EventEmitter();
emitter.setMaxListeners(emitter.getMaxListeners() + 1);
emitter.once('event', () => {
  // do stuff
  emitter.setMaxListeners(Math.max(emitter.getMaxListeners() - 1, 0));
});

The --trace-warnings command-line flag can be used to display the stack trace for such warnings.

The emitted warning can be inspected with process.on('warning') and will have the additional emitter, type, and count properties, referring to the event emitter instance, the event's name and the number of attached listeners, respectively. Its name property is set to 'MaxListenersExceededWarning'.
- readonly static errorMonitor: typeof errorMonitor
This symbol shall be used to install a listener for only monitoring 'error' events. Listeners installed using this symbol are called before the regular 'error' listeners are called.

Installing a listener using this symbol does not change the behavior once an 'error' event is emitted. Therefore, the process will still crash if no regular 'error' listener is installed.

Calls writable.destroy() with an AbortError and returns a promise that fulfills when the stream is finished.
- event: 'close', listener: () => void): this;
Event emitter. The defined events on documents include:
- close
- drain
- error
- finish
- pipe
- unpipe

The same set of events applies to each of the following overloads:

event: 'drain', listener: () => void): this;
event: 'error', ): this;
event: 'finish', listener: () => void): this;
event: 'pipe', ): this;
event: 'unpipe', ): this;
event: string | symbol, listener: (...args: any[]) => void): this;
- ): void;
This method adds HTTP trailing headers (a header but at the end of the message) to the response.
Attempting to set a header field name or value that contains invalid characters will result in a TypeError being thrown.
- name: string, value: string | string[]): void;
Append a single header value to the header object.
If the value is an array, this is equivalent to calling this method multiple times.
If there were no previous values for the header, this is equivalent to calling setHeader.
Attempting to set a header field name or value that contains invalid characters will result in a TypeError being thrown.
// Returns headers including "set-cookie: a" and "set-cookie: b"
const server = http2.createServer((req, res) => {
  res.setHeader('set-cookie', 'a');
  res.appendHeader('set-cookie', 'b');
  res.writeHead(200);
  res.end('ok');
});

- stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
The writable.cork() method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.

The primary intent of writable.cork() is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them all to writable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork() without implementing writable._writev() may have an adverse effect on throughput.

See also: writable.uncork(), writable._writev().
- ): void;
Call http2stream.pushStream() with the given headers, and wrap the given Http2Stream on a newly created Http2ServerResponse as the callback parameter if successful. When Http2ServerRequest is closed, the callback is called with an error ERR_HTTP2_INVALID_STREAM.
@param headers An object describing the headers
@param callback Called once http2stream.pushStream() is finished, either when the attempt to create the pushed Http2Stream has failed or has been rejected, or when the state of Http2ServerRequest is closed prior to calling the http2stream.pushStream() method
- ): this;
Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the writable stream has ended and subsequent calls to write() or end() will result in an ERR_STREAM_DESTROYED error. This is a destructive and immediate way to destroy a stream. Previous calls to write() may not have drained, and may trigger an ERR_STREAM_DESTROYED error. Use end() instead of destroy if data should flush before close, or wait for the 'drain' event before destroying the stream.

Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.

Implementors should not override this method, but instead implement writable._destroy().
@param error Optional, an error to emit with 'error' event.
- emit(event: 'close'): boolean;
Synchronously calls each of the listeners registered for the event named
eventName, in the order they were registered, passing the supplied arguments to each.Returns
trueif the event had listeners,falseotherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener - end(callback?: () => void): this;
This method signals to the server that all of the response headers and body have been sent; the server should consider this message complete. The method,
response.end(), MUST be called on each response.If
datais specified, it is equivalent to callingresponse.write(data, encoding)followed byresponse.end(callback).If
callbackis specified, it will be called when the response stream is finished.end(callback?: () => void): this;This method signals to the server that all of the response headers and body have been sent; the server should consider this message complete. The method,
response.end(), MUST be called on each response.If
datais specified, it is equivalent to callingresponse.write(data, encoding)followed byresponse.end(callback).If
callbackis specified, it will be called when the response stream is finished.end(encoding: BufferEncoding,callback?: () => void): this;This method signals to the server that all of the response headers and body have been sent; the server should consider this message complete. The method,
response.end(), MUST be called on each response.If
datais specified, it is equivalent to callingresponse.write(data, encoding)followed byresponse.end(callback).If
callbackis specified, it will be called when the response stream is finished. Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbols.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]- name: string): string;
Reads out a header that has already been queued but not sent to the client. The name is case-insensitive.
const contentType = response.getHeader('content-type'); Returns an array containing the unique names of the current outgoing headers. All header names are lowercase.
response.setHeader('Foo', 'bar'); response.setHeader('Set-Cookie', ['foo=bar', 'bar=baz']); const headerNames = response.getHeaderNames(); // headerNames === ['foo', 'set-cookie']Returns a shallow copy of the current outgoing headers. Since a shallow copy is used, array values may be mutated without additional calls to various header-related http module methods. The keys of the returned object are the header names and the values are the respective header values. All header names are lowercase.
The object returned by the
response.getHeaders()method does not prototypically inherit from the JavaScriptObject. This means that typicalObjectmethods such asobj.toString(),obj.hasOwnProperty(), and others are not defined and will not work.response.setHeader('Foo', 'bar'); response.setHeader('Set-Cookie', ['foo=bar', 'bar=baz']); const headers = response.getHeaders(); // headers === { foo: 'bar', 'set-cookie': ['foo=bar', 'bar=baz'] }Returns the current max listener value for the
EventEmitterwhich is either set byemitter.setMaxListeners(n)or defaults to EventEmitter.defaultMaxListeners.- name: string): boolean;
Returns
trueif the header identified bynameis currently set in the outgoing headers. The header name matching is case-insensitive.const hasContentType = response.hasHeader('content-type'); - eventName: string | symbol,listener?: Function): number;
Returns the number of listeners listening for the event named
eventName. Iflisteneris provided, it will return how many times the listener is found in the list of the listeners of the event.@param eventNameThe name of the event being listened for
@param listenerThe event handler function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName.server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ] - eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for
emitter.removeListener(). - on(event: 'close',listener: () => void): this;
Adds the
listenerfunction to the end of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
- once(event: 'close',listener: () => void): this;
Adds a one-time
listenerfunction for the event namedeventName. The next timeeventNameis triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
- event: 'close',listener: () => void): this;
Adds the
listenerfunction to the beginning of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
- event: 'close',listener: () => void): this;
Adds a one-time
listenerfunction for the event namedeventNameto the beginning of the listeners array. The next timeeventNameis triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName, including any wrappers (such as those created by.once()).import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log'); - eventName?: string | symbol): this;
Removes all listeners, or those of the specified
eventName.It is bad practice to remove listeners added elsewhere in the code, particularly when the
EventEmitterinstance was created by some other component or module (e.g. sockets or file streams).Returns a reference to the
EventEmitter, so that calls can be chained. - name: string): void;
Removes a header that has been queued for implicit sending.
response.removeHeader('Content-Encoding'); - event: 'close',listener: () => void): this;
Removes the specified
listenerfrom the listener array for the event namedeventName.const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback);removeListener()will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specifiedeventName, thenremoveListener()must be called multiple times to remove each instance.Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any
removeListener()orremoveAllListeners()calls after emitting and before the last listener finishes execution will not remove them fromemit()in progress. Subsequent events behave as expected.import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // ABecause listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the
emitter.listeners()method will need to be recreated.When a single function has been added as a handler multiple times for a single event (as in the example below),
removeListener()will remove the most recently added instance. In the example theonce('ping')listener is removed:import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping');Returns a reference to the
EventEmitter, so that calls can be chained. - encoding: BufferEncoding): this;
The
writable.setDefaultEncoding()method sets the defaultencodingfor aWritablestream.@param encodingThe new default encoding
- name: string,value: string | number | readonly string[]): void;
Sets a single header value for implicit headers. If this header already exists in the to-be-sent headers, its value will be replaced. Use an array of strings here to send multiple headers with the same name.
response.setHeader('Content-Type', 'text/html; charset=utf-8');or
response.setHeader('Set-Cookie', ['type=ninja', 'language=javascript']);Attempting to set a header field name or value that contains invalid characters will result in a
TypeErrorbeing thrown.When headers have been set with
response.setHeader(), they will be merged with any headers passed toresponse.writeHead(), with the headers passed toresponse.writeHead()given precedence.// Returns content-type = text/plain const server = http2.createServer((req, res) => { res.setHeader('Content-Type', 'text/html; charset=utf-8'); res.setHeader('X-Foo', 'bar'); res.writeHead(200, { 'Content-Type': 'text/plain; charset=utf-8' }); res.end('ok'); }); - n: number): this;
By default
EventEmitters will print a warning if more than10listeners are added for a particular event. This is a useful default that helps find memory leaks. Theemitter.setMaxListeners()method allows the limit to be modified for this specificEventEmitterinstance. The value can be set toInfinity(or0) to indicate an unlimited number of listeners.Returns a reference to the
EventEmitter, so that calls can be chained. - msecs: number,callback?: () => void): void;
Sets the
Http2Stream's timeout value tomsecs. If a callback is provided, then it is added as a listener on the'timeout'event on the response object.If no
'timeout'listener is added to the request, the response, or the server, thenHttp2Streams are destroyed when they time out. If a handler is assigned to the request, the response, or the server's'timeout'events, timed out sockets must be handled explicitly. The
writable.uncork()method flushes all data buffered since cork was called.When using
writable.cork()andwritable.uncork()to manage the buffering of writes to a stream, defer calls towritable.uncork()usingprocess.nextTick(). Doing so allows batching of allwritable.write()calls that occur within a given Node.js event loop phase.stream.cork(); stream.write('some '); stream.write('data '); process.nextTick(() => stream.uncork());If the
writable.cork()method is called multiple times on a stream, the same number of calls towritable.uncork()must be called to flush the buffered data.stream.cork(); stream.write('some '); stream.cork(); stream.write('data '); process.nextTick(() => { stream.uncork(); // The data will not be flushed until uncork() is called a second time. stream.uncork(); });See also:
writable.cork().- ): boolean;
If this method is called and
response.writeHead()has not been called, it will switch to implicit header mode and flush the implicit headers.This sends a chunk of the response body. This method may be called multiple times to provide successive parts of the body.
In the
node:httpmodule, the response body is omitted when the request is a HEAD request. Similarly, the204and304responses must not include a message body.chunkcan be a string or a buffer. Ifchunkis a string, the second parameter specifies how to encode it into a byte stream. By default theencodingis'utf8'.callbackwill be called when this chunk of data is flushed.This is the raw HTTP body and has nothing to do with higher-level multi-part body encodings that may be used.
The first time
response.write()is called, it will send the buffered header information and the first chunk of the body to the client. The second timeresponse.write()is called, Node.js assumes data will be streamed, and sends the new data separately. That is, the response is buffered up to the first chunk of the body.Returns
trueif the entire data was flushed successfully to the kernel buffer. Returnsfalseif all or part of the data was queued in user memory.'drain'will be emitted when the buffer is free again.encoding: BufferEncoding,): boolean;If this method is called and
response.writeHead()has not been called, it will switch to implicit header mode and flush the implicit headers.This sends a chunk of the response body. This method may be called multiple times to provide successive parts of the body.
In the
node:httpmodule, the response body is omitted when the request is a HEAD request. Similarly, the204and304responses must not include a message body.chunkcan be a string or a buffer. Ifchunkis a string, the second parameter specifies how to encode it into a byte stream. By default theencodingis'utf8'.callbackwill be called when this chunk of data is flushed.This is the raw HTTP body and has nothing to do with higher-level multi-part body encodings that may be used.
The first time
response.write()is called, it will send the buffered header information and the first chunk of the body to the client. The second timeresponse.write()is called, Node.js assumes data will be streamed, and sends the new data separately. That is, the response is buffered up to the first chunk of the body.Returns
trueif the entire data was flushed successfully to the kernel buffer. Returnsfalseif all or part of the data was queued in user memory.'drain'will be emitted when the buffer is free again. Sends a status
100 Continueto the client, indicating that the request body should be sent. See the'checkContinue'event onHttp2ServerandHttp2SecureServer.- hints: Record<string, string | string[]>): void;
Sends a status
103 Early Hintsto the client with a Link header, indicating that the user agent can preload/preconnect the linked resources. Thehintsargument is an object containing the values of headers to be sent with the early hints message.Example
const earlyHintsLink = '</styles.css>; rel=preload; as=style'; response.writeEarlyHints({ 'link': earlyHintsLink, }); const earlyHintsLinks = [ '</styles.css>; rel=preload; as=style', '</scripts.js>; rel=preload; as=script', ]; response.writeEarlyHints({ 'link': earlyHintsLinks, }); - statusCode: number,): this;
Sends a response header to the request. The status code is a 3-digit HTTP status code, like
404. The last argument,headers, are the response headers.Returns a reference to the
Http2ServerResponse, so that calls can be chained.For compatibility with
HTTP/1, a human-readablestatusMessagemay be passed as the second argument. However, because thestatusMessagehas no meaning within HTTP/2, the argument will have no effect and a process warning will be emitted.const body = 'hello world'; response.writeHead(200, { 'Content-Length': Buffer.byteLength(body), 'Content-Type': 'text/plain; charset=utf-8', });Content-Lengthis given in bytes not characters. TheBuffer.byteLength()API may be used to determine the number of bytes in a given encoding. On outbound messages, Node.js does not check if Content-Length and the length of the body being transmitted are equal or not. However, when receiving messages, Node.js will automatically reject messages when theContent-Lengthdoes not match the actual payload size.This method may be called at most one time on a message before
response.end()is called.If
response.write()orresponse.end()are called before calling this, the implicit/mutable headers will be calculated and this function will be called automatically.When headers have been set with
response.setHeader(), they will be merged with any headers passed toresponse.writeHead(), with the headers passed toresponse.writeHead()given precedence.// Returns content-type = text/plain const server = http2.createServer((req, res) => { res.setHeader('Content-Type', 'text/html; charset=utf-8'); res.setHeader('X-Foo', 'bar'); res.writeHead(200, { 'Content-Type': 'text/plain; charset=utf-8' }); res.end('ok'); });Attempting to set a header field name or value that contains invalid characters will result in a
TypeErrorbeing thrown.statusCode: number,statusMessage: string,): this;Sends a response header to the request. The status code is a 3-digit HTTP status code, like
404. The last argument,headers, are the response headers.Returns a reference to the
Http2ServerResponse, so that calls can be chained.For compatibility with
HTTP/1, a human-readablestatusMessagemay be passed as the second argument. However, because thestatusMessagehas no meaning within HTTP/2, the argument will have no effect and a process warning will be emitted.const body = 'hello world'; response.writeHead(200, { 'Content-Length': Buffer.byteLength(body), 'Content-Type': 'text/plain; charset=utf-8', });Content-Lengthis given in bytes not characters. TheBuffer.byteLength()API may be used to determine the number of bytes in a given encoding. On outbound messages, Node.js does not check if Content-Length and the length of the body being transmitted are equal or not. However, when receiving messages, Node.js will automatically reject messages when theContent-Lengthdoes not match the actual payload size.This method may be called at most one time on a message before
response.end()is called.If
response.write()orresponse.end()are called before calling this, the implicit/mutable headers will be calculated and this function will be called automatically.When headers have been set with
response.setHeader(), they will be merged with any headers passed toresponse.writeHead(), with the headers passed toresponse.writeHead()given precedence.// Returns content-type = text/plain const server = http2.createServer((req, res) => { res.setHeader('Content-Type', 'text/html; charset=utf-8'); res.setHeader('X-Foo', 'bar'); res.writeHead(200, { 'Content-Type': 'text/plain; charset=utf-8' }); res.end('ok'); });Attempting to set a header field name or value that contains invalid characters will result in a
TypeErrorbeing thrown. - ): Disposable;
Listens once to the
abortevent on the providedsignal.Listening to the
abortevent on abort signals is unsafe and may lead to resource leaks since another third party with the signal can calle.stopImmediatePropagation(). Unfortunately Node.js cannot change this since it would violate the web standard. Additionally, the original API makes it easy to forget to remove listeners.This API allows safely using
AbortSignals in Node.js APIs by solving these two issues by listening to the event such thatstopImmediatePropagationdoes not prevent the listener from running.Returns a disposable so that it may be unsubscribed from more easily.
import { addAbortListener } from 'node:events'; function example(signal) { let disposable; try { signal.addEventListener('abort', (e) => e.stopImmediatePropagation()); disposable = addAbortListener(signal, (e) => { // Do something when signal is aborted. }); } finally { disposable?.[Symbol.dispose](); } }@returnsDisposable that removes the
abortlistener. - options?: Pick<WritableOptions<Writable>, 'signal' | 'decodeStrings' | 'highWaterMark' | 'objectMode'>
A utility method for creating a
Writablefrom a webWritableStream. - name: string | symbol): Function[];
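A minimal sketch of Writable.fromWeb() as described above, assuming a Node.js version where WritableStream is available as a global; the logging sink is illustrative only:

import { Writable } from 'node:stream';

// A WHATWG WritableStream that just logs what it receives.
const webStream = new WritableStream({
  write(chunk) {
    console.log('received:', chunk);
  },
});

// Adapt it so Node.js stream code (write(), pipe(), etc.) can use it.
const nodeWritable = Writable.fromWeb(webStream, { objectMode: true });
nodeWritable.write('hello from the Node side');
nodeWritable.end();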
Returns a copy of the array of listeners for the event named
eventName.For
EventEmitters this behaves exactly the same as calling.listenerson the emitter.For
EventTargets this is the only way to get the event listeners for the event target. This is useful for debugging and diagnostic purposes.import { getEventListeners, EventEmitter } from 'node:events'; { const ee = new EventEmitter(); const listener = () => console.log('Events are fun'); ee.on('foo', listener); console.log(getEventListeners(ee, 'foo')); // [ [Function: listener] ] } { const et = new EventTarget(); const listener = () => console.log('Events are fun'); et.addEventListener('foo', listener); console.log(getEventListeners(et, 'foo')); // [ [Function: listener] ] } - ): number;
Returns the currently set max amount of listeners.
For
EventEmitters this behaves exactly the same as calling.getMaxListenerson the emitter.For
EventTargets this is the only way to get the max event listeners for the event target. If the number of event handlers on a single EventTarget exceeds the max set, the EventTarget will print a warning.import { getMaxListeners, setMaxListeners, EventEmitter } from 'node:events'; { const ee = new EventEmitter(); console.log(getMaxListeners(ee)); // 10 setMaxListeners(11, ee); console.log(getMaxListeners(ee)); // 11 } { const et = new EventTarget(); console.log(getMaxListeners(et)); // 10 setMaxListeners(11, et); console.log(getMaxListeners(et)); // 11 } - emitter: EventEmitter,eventName: string | symbol,options?: StaticEventEmitterIteratorOptions): AsyncIterator<any[]>;
import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); }); for await (const event of on(ee, 'foo')) { // The execution of this inner block is synchronous and it // processes one event at a time (even with await). Do not use // if concurrent execution is required. console.log(event); // prints ['bar'] [42] } // Unreachable hereReturns an
AsyncIteratorthat iterateseventNameevents. It will throw if theEventEmitteremits'error'. It removes all listeners when exiting the loop. Thevaluereturned by each iteration is an array composed of the emitted event arguments.An
AbortSignalcan be used to cancel waiting on events:import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ac = new AbortController(); (async () => { const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); }); for await (const event of on(ee, 'foo', { signal: ac.signal })) { // The execution of this inner block is synchronous and it // processes one event at a time (even with await). Do not use // if concurrent execution is required. console.log(event); // prints ['bar'] [42] } // Unreachable here })(); process.nextTick(() => ac.abort());Use the
closeoption to specify an array of event names that will end the iteration:import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); ee.emit('close'); }); for await (const event of on(ee, 'foo', { close: ['close'] })) { console.log(event); // prints ['bar'] [42] } // the loop will exit after 'close' is emitted console.log('done'); // prints 'done'@returnsAn
AsyncIteratorthat iterateseventNameevents emitted by theemittereventName: string,options?: StaticEventEmitterIteratorOptions): AsyncIterator<any[]>;import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); }); for await (const event of on(ee, 'foo')) { // The execution of this inner block is synchronous and it // processes one event at a time (even with await). Do not use // if concurrent execution is required. console.log(event); // prints ['bar'] [42] } // Unreachable hereReturns an
AsyncIteratorthat iterateseventNameevents. It will throw if theEventEmitteremits'error'. It removes all listeners when exiting the loop. Thevaluereturned by each iteration is an array composed of the emitted event arguments.An
AbortSignalcan be used to cancel waiting on events:import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ac = new AbortController(); (async () => { const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); }); for await (const event of on(ee, 'foo', { signal: ac.signal })) { // The execution of this inner block is synchronous and it // processes one event at a time (even with await). Do not use // if concurrent execution is required. console.log(event); // prints ['bar'] [42] } // Unreachable here })(); process.nextTick(() => ac.abort());Use the
closeoption to specify an array of event names that will end the iteration:import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); ee.emit('close'); }); for await (const event of on(ee, 'foo', { close: ['close'] })) { console.log(event); // prints ['bar'] [42] } // the loop will exit after 'close' is emitted console.log('done'); // prints 'done'@returnsAn
AsyncIteratorthat iterateseventNameevents emitted by theemitter - emitter: EventEmitter,eventName: string | symbol,options?: StaticEventEmitterOptions): Promise<any[]>;
Creates a
Promisethat is fulfilled when theEventEmitteremits the given event or that is rejected if theEventEmitteremits'error'while waiting. ThePromisewill resolve with an array of all the arguments emitted to the given event.This method is intentionally generic and works with the web platform EventTarget interface, which has no special
'error'event semantics and does not listen to the'error'event.import { once, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); process.nextTick(() => { ee.emit('myevent', 42); }); const [value] = await once(ee, 'myevent'); console.log(value); const err = new Error('kaboom'); process.nextTick(() => { ee.emit('error', err); }); try { await once(ee, 'myevent'); } catch (err) { console.error('error happened', err); }The special handling of the
'error'event is only used whenevents.once()is used to wait for another event. Ifevents.once()is used to wait for the 'error'event itself, then it is treated as any other kind of event without special handling:import { EventEmitter, once } from 'node:events'; const ee = new EventEmitter(); once(ee, 'error') .then(([err]) => console.log('ok', err.message)) .catch((err) => console.error('error', err.message)); ee.emit('error', new Error('boom')); // Prints: ok boomAn
AbortSignalcan be used to cancel waiting for the event:import { EventEmitter, once } from 'node:events'; const ee = new EventEmitter(); const ac = new AbortController(); async function foo(emitter, event, signal) { try { await once(emitter, event, { signal }); console.log('event emitted!'); } catch (error) { if (error.name === 'AbortError') { console.error('Waiting for the event was canceled!'); } else { console.error('There was an error', error.message); } } } foo(ee, 'foo', ac.signal); ac.abort(); // Abort waiting for the event ee.emit('foo'); // Prints: Waiting for the event was canceled!eventName: string,options?: StaticEventEmitterOptions): Promise<any[]>;Creates a
Promisethat is fulfilled when theEventEmitteremits the given event or that is rejected if theEventEmitteremits'error'while waiting. ThePromisewill resolve with an array of all the arguments emitted to the given event.This method is intentionally generic and works with the web platform EventTarget interface, which has no special
'error'event semantics and does not listen to the'error'event.import { once, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); process.nextTick(() => { ee.emit('myevent', 42); }); const [value] = await once(ee, 'myevent'); console.log(value); const err = new Error('kaboom'); process.nextTick(() => { ee.emit('error', err); }); try { await once(ee, 'myevent'); } catch (err) { console.error('error happened', err); }The special handling of the
'error'event is only used whenevents.once()is used to wait for another event. Ifevents.once()is used to wait for the 'error'event itself, then it is treated as any other kind of event without special handling:import { EventEmitter, once } from 'node:events'; const ee = new EventEmitter(); once(ee, 'error') .then(([err]) => console.log('ok', err.message)) .catch((err) => console.error('error', err.message)); ee.emit('error', new Error('boom')); // Prints: ok boomAn
AbortSignalcan be used to cancel waiting for the event:import { EventEmitter, once } from 'node:events'; const ee = new EventEmitter(); const ac = new AbortController(); async function foo(emitter, event, signal) { try { await once(emitter, event, { signal }); console.log('event emitted!'); } catch (error) { if (error.name === 'AbortError') { console.error('Waiting for the event was canceled!'); } else { console.error('There was an error', error.message); } } } foo(ee, 'foo', ac.signal); ac.abort(); // Abort waiting for the event ee.emit('foo'); // Prints: Waiting for the event was canceled! - n?: number,): void;
import { setMaxListeners, EventEmitter } from 'node:events'; const target = new EventTarget(); const emitter = new EventEmitter(); setMaxListeners(5, target, emitter);@param nA non-negative number. The maximum number of listeners per
EventTargetevent.@param eventTargetsZero or more {EventTarget} or {EventEmitter} instances. If none are specified,
nis set as the default max for all newly created {EventTarget} and {EventEmitter} objects. A utility method for creating a web
WritableStreamfrom aWritable.
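The inverse direction, sketched with Writable.toWeb(); the sink and messages are illustrative only:

import { Writable } from 'node:stream';

const nodeWritable = new Writable({
  write(chunk, encoding, callback) {
    console.log('received:', chunk.toString());
    callback();
  },
});

// Adapt the Node.js Writable to a WHATWG WritableStream.
const webStream = Writable.toWeb(nodeWritable);
const writer = webStream.getWriter();
await writer.write('hello from the web side');
await writer.close();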
This symbol can be set as a property on the HTTP/2 headers object with an array value in order to provide a list of headers considered sensitive.
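A short sketch of marking a header as sensitive with this symbol; the URL, path, and token are placeholders:

import http2 from 'node:http2';

const client = http2.connect('https://localhost:8443');
const req = client.request({
  ':path': '/account',
  'authorization': 'Bearer a-secret-token',
  // Mark the authorization header as sensitive so HPACK never indexes it.
  [http2.sensitiveHeaders]: ['authorization'],
});
req.end();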
Returns a
ClientHttp2Sessioninstance.import http2 from 'node:http2'; const client = http2.connect('https://localhost:1234'); // Use the client client.close();@param authorityThe remote HTTP/2 server to connect to. This must be in the form of a minimal, valid URL with the
http://orhttps://prefix, host name, and IP port (if a non-default port is used). Userinfo (user ID and password), path, querystring, and fragment details in the URL will be ignored.@param listenerWill be registered as a one-time listener of the 'connect' event.
Returns a
ClientHttp2Sessioninstance.import http2 from 'node:http2'; const client = http2.connect('https://localhost:1234'); // Use the client client.close();@param authorityThe remote HTTP/2 server to connect to. This must be in the form of a minimal, valid URL with the
http://orhttps://prefix, host name, and IP port (if a non-default port is used). Userinfo (user ID and password), path, querystring, and fragment details in the URL will be ignored.@param listenerWill be registered as a one-time listener of the 'connect' event.
Returns a
tls.Serverinstance that creates and managesHttp2Sessioninstances.import http2 from 'node:http2'; import fs from 'node:fs'; const options = { key: fs.readFileSync('server-key.pem'), cert: fs.readFileSync('server-cert.pem'), }; // Create a secure HTTP/2 server const server = http2.createSecureServer(options); server.on('stream', (stream, headers) => { stream.respond({ 'content-type': 'text/html; charset=utf-8', ':status': 200, }); stream.end('<h1>Hello World</h1>'); }); server.listen(8443);@param onRequestHandlerSee
Compatibility APIfunction createSecureServer<Http1Request extends typeof IncomingMessage = typeof IncomingMessage, Http1Response extends typeof ServerResponse = typeof ServerResponse, Http2Request extends typeof Http2ServerRequest = typeof Http2ServerRequest, Http2Response extends typeof Http2ServerResponse = typeof Http2ServerResponse>(onRequestHandler?: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => voidReturns a
tls.Serverinstance that creates and managesHttp2Sessioninstances.import http2 from 'node:http2'; import fs from 'node:fs'; const options = { key: fs.readFileSync('server-key.pem'), cert: fs.readFileSync('server-cert.pem'), }; // Create a secure HTTP/2 server const server = http2.createSecureServer(options); server.on('stream', (stream, headers) => { stream.respond({ 'content-type': 'text/html; charset=utf-8', ':status': 200, }); stream.end('<h1>Hello World</h1>'); }); server.listen(8443);@param onRequestHandlerSee
Compatibility APIReturns a
net.Serverinstance that creates and managesHttp2Sessioninstances.Since there are no browsers known that support unencrypted HTTP/2, the use of createSecureServer is necessary when communicating with browser clients.
import http2 from 'node:http2'; // Create an unencrypted HTTP/2 server. // Since there are no browsers known that support // unencrypted HTTP/2, the use of `http2.createSecureServer()` // is necessary when communicating with browser clients. const server = http2.createServer(); server.on('stream', (stream, headers) => { stream.respond({ 'content-type': 'text/html; charset=utf-8', ':status': 200, }); stream.end('<h1>Hello World</h1>'); }); server.listen(8000);@param onRequestHandlerSee
Compatibility APIfunction createServer<Http1Request extends typeof IncomingMessage = typeof IncomingMessage, Http1Response extends typeof ServerResponse = typeof ServerResponse, Http2Request extends typeof Http2ServerRequest = typeof Http2ServerRequest, Http2Response extends typeof Http2ServerResponse = typeof Http2ServerResponse>(onRequestHandler?: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => voidReturns a
net.Serverinstance that creates and managesHttp2Sessioninstances.Since there are no browsers known that support unencrypted HTTP/2, the use of createSecureServer is necessary when communicating with browser clients.
import http2 from 'node:http2'; // Create an unencrypted HTTP/2 server. // Since there are no browsers known that support // unencrypted HTTP/2, the use of `http2.createSecureServer()` // is necessary when communicating with browser clients. const server = http2.createServer(); server.on('stream', (stream, headers) => { stream.respond({ 'content-type': 'text/html; charset=utf-8', ':status': 200, }); stream.end('<h1>Hello World</h1>'); }); server.listen(8000);@param onRequestHandlerSee
Compatibility APIReturns an object containing the default settings for an
Http2Sessioninstance. This method returns a new object instance every time it is called so instances returned may be safely modified for use.Returns a
Bufferinstance containing serialized representation of the given HTTP/2 settings as specified in the HTTP/2 specification. This is intended for use with theHTTP2-Settingsheader field.import http2 from 'node:http2'; const packed = http2.getPackedSettings({ enablePush: false }); console.log(packed.toString('base64')); // Prints: AAIAAAAAReturns a
HTTP/2 Settings Objectcontaining the deserialized settings from the givenBufferas generated byhttp2.getPackedSettings().@param bufThe packed settings.
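Pairing the two settings helpers above, a short round-trip sketch (the chosen settings values are arbitrary):

import http2 from 'node:http2';

const packed = http2.getPackedSettings({ enablePush: false, initialWindowSize: 65535 });
console.log(packed.toString('base64'));   // suitable for an HTTP2-Settings header field

const settings = http2.getUnpackedSettings(packed);
console.log(settings.enablePush);         // false
console.log(settings.initialWindowSize);  // 65535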
- function performServerHandshake<Http1Request extends typeof IncomingMessage = typeof IncomingMessage, Http1Response extends typeof ServerResponse = typeof ServerResponse, Http2Request extends typeof Http2ServerRequest = typeof Http2ServerRequest, Http2Response extends typeof Http2ServerResponse = typeof Http2ServerResponse>(
Create an HTTP/2 server session from an existing socket.
@param socketA Duplex Stream
@param optionsAny
{@link createServer}options can be provided.
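A minimal sketch of driving the handshake manually on a raw TCP socket, assuming a Node.js version that exposes http2.performServerHandshake(); ports and response bodies are illustrative only:

import http2 from 'node:http2';
import net from 'node:net';

// Accept raw TCP connections ourselves and create the HTTP/2 server session
// from the socket instead of letting http2.createServer() manage it.
const tcpServer = net.createServer((socket) => {
  const session = http2.performServerHandshake(socket);
  session.on('stream', (stream) => {
    stream.respond({ ':status': 200, 'content-type': 'text/plain; charset=utf-8' });
    stream.end('handled via performServerHandshake');
  });
});
tcpServer.listen(8000);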
Type definitions
interface AlternativeServiceOptions
interface ClientHttp2Session
The
EventEmitterclass is defined and exposed by thenode:eventsmodule:import { EventEmitter } from 'node:events';All
EventEmitters emit the event'newListener'when new listeners are added and'removeListener'when existing listeners are removed.It supports the following option:
- readonly alpnProtocol?: string
Value will be
undefinedif theHttp2Sessionis not yet connected to a socket,h2cif theHttp2Sessionis not connected to aTLSSocket, or will return the value of the connectedTLSSocket's ownalpnProtocolproperty. - readonly closed: boolean
Will be
trueif thisHttp2Sessioninstance has been closed, otherwisefalse. - readonly connecting: boolean
Will be
trueif thisHttp2Sessioninstance is still connecting, will be set tofalsebefore emittingconnectevent and/or calling thehttp2.connectcallback. - readonly destroyed: boolean
Will be
trueif thisHttp2Sessioninstance has been destroyed and must no longer be used, otherwisefalse. - readonly encrypted?: boolean
Value is
undefinedif theHttp2Sessionsession socket has not yet been connected,trueif theHttp2Sessionis connected with aTLSSocket, andfalseif theHttp2Sessionis connected to any other kind of socket or stream. - readonly localSettings: Settings
A prototype-less object describing the current local settings of this
Http2Session. The local settings are local to thisHttp2Sessioninstance. - readonly originSet?: string[]
If the
Http2Sessionis connected to aTLSSocket, theoriginSetproperty will return anArrayof origins for which theHttp2Sessionmay be considered authoritative.The
originSetproperty is only available when using a secure TLS connection. - readonly pendingSettingsAck: boolean
Indicates whether the
Http2Sessionis currently waiting for acknowledgment of a sentSETTINGSframe. Will betrueafter calling thehttp2session.settings()method. Will befalseonce all sentSETTINGSframes have been acknowledged. - readonly remoteSettings: Settings
A prototype-less object describing the current remote settings of this
Http2Session. The remote settings are set by the connected HTTP/2 peer. - readonly socket: Socket | TLSSocket
Returns a
Proxyobject that acts as anet.Socket(ortls.TLSSocket) but limits available methods to ones safe to use with HTTP/2.destroy,emit,end,pause,read,resume, andwritewill throw an error with codeERR_HTTP2_NO_SOCKET_MANIPULATION. SeeHttp2Session and Socketsfor more information.setTimeoutmethod will be called on thisHttp2Session.All other interactions will be routed directly to the socket.
- readonly state: SessionState
Provides miscellaneous information about the current state of the
Http2Session.An object describing the current status of this
Http2Session. - readonly type: number
The
http2session.typewill be equal tohttp2.constants.NGHTTP2_SESSION_SERVERif thisHttp2Sessioninstance is a server, andhttp2.constants.NGHTTP2_SESSION_CLIENTif the instance is a client. - event: 'altsvc',listener: (alt: string, origin: string, stream: number) => void): this;
Alias for
emitter.on(eventName, listener).event: 'origin',listener: (origins: string[]) => void): this;Alias for
emitter.on(eventName, listener).event: 'connect',): this;Alias for
emitter.on(eventName, listener).event: 'stream',listener: (stream: ClientHttp2Stream, headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void): this;Alias for
emitter.on(eventName, listener).event: string | symbol,listener: (...args: any[]) => void): this;Alias for
emitter.on(eventName, listener). - callback?: () => void): void;
Gracefully closes the
Http2Session, allowing any existing streams to complete on their own and preventing newHttp2Streaminstances from being created. Once closed,http2session.destroy()might be called if there are no openHttp2Streaminstances.If specified, the
callbackfunction is registered as a handler for the'close'event. - code?: number): void;
Immediately terminates the
Http2Sessionand the associatednet.Socketortls.TLSSocket.Once destroyed, the
Http2Sessionwill emit the'close'event. Iferroris not undefined, an'error'event will be emitted immediately before the'close'event.If there are any remaining open
Http2Streamsassociated with theHttp2Session, those will also be destroyed.@param errorAn
Errorobject if theHttp2Sessionis being destroyed due to an error.@param codeThe HTTP/2 error code to send in the final
GOAWAYframe. If unspecified, anderroris not undefined, the default isINTERNAL_ERROR, otherwise defaults toNO_ERROR. - emit(event: 'altsvc',alt: string,origin: string,stream: number): boolean;
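A short sketch contrasting the graceful close() and the immediate destroy() described above, on a client session; the server address is a placeholder:

import http2 from 'node:http2';

const client = http2.connect('http://localhost:8000');
const req = client.request({ ':path': '/' });
req.on('close', () => {
  // Graceful: let in-flight streams finish, then the session closes.
  client.close(() => console.log('session closed'));
});
req.end();

// Hard teardown would look like this instead (open streams are destroyed too):
// client.destroy(new Error('shutting down'), http2.constants.NGHTTP2_CANCEL);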
Synchronously calls each of the listeners registered for the event named
eventName, in the order they were registered, passing the supplied arguments to each.Returns
trueif the event had listeners,falseotherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listeneremit(event: 'origin',origins: readonly string[]): boolean;Synchronously calls each of the listeners registered for the event named
eventName, in the order they were registered, passing the supplied arguments to each.Returns
trueif the event had listeners,falseotherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listeneremit(event: 'connect',): boolean;Synchronously calls each of the listeners registered for the event named
eventName, in the order they were registered, passing the supplied arguments to each.Returns
trueif the event had listeners,falseotherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listeneremit(event: 'stream',flags: number): boolean;Synchronously calls each of the listeners registered for the event named
eventName, in the order they were registered, passing the supplied arguments to each.Returns
trueif the event had listeners,falseotherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listeneremit(event: string | symbol,...args: any[]): boolean;Synchronously calls each of the listeners registered for the event named
eventName, in the order they were registered, passing the supplied arguments to each.Returns
trueif the event had listeners,falseotherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbols.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]Returns the current max listener value for the
EventEmitterwhich is either set byemitter.setMaxListeners(n)or defaults to EventEmitter.defaultMaxListeners.- code?: number,lastStreamID?: number,opaqueData?: ArrayBufferView<ArrayBufferLike>): void;
Transmits a
GOAWAYframe to the connected peer without shutting down theHttp2Session.@param codeAn HTTP/2 error code
@param lastStreamIDThe numeric ID of the last processed
Http2Stream@param opaqueDataA
TypedArrayorDataViewinstance containing additional data to be carried within theGOAWAYframe. - eventName: string | symbol,listener?: Function): number;
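A minimal sketch of sending a GOAWAY frame without shutting the session down, using a client session for illustration; the opaque payload and address are placeholders:

import http2 from 'node:http2';

const client = http2.connect('http://localhost:8000');
client.on('connect', () => {
  // Announce shutdown to the peer: NO_ERROR code, last processed stream 0,
  // plus opaque debug data carried in the GOAWAY frame.
  client.goaway(http2.constants.NGHTTP2_NO_ERROR, 0, Buffer.from('maintenance'));
  client.close();
});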
Returns the number of listeners listening for the event named
eventName. Iflisteneris provided, it will return how many times the listener is found in the list of the listeners of the event.@param eventNameThe name of the event being listened for
@param listenerThe event handler function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName.server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ] - eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for
emitter.removeListener(). - on(event: 'altsvc',listener: (alt: string, origin: string, stream: number) => void): this;
Adds the
listenerfunction to the end of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
on(event: 'origin',listener: (origins: string[]) => void): this;Adds the
listenerfunction to the end of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
on(event: 'connect',): this;Adds the
listenerfunction to the end of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
on(event: 'stream',listener: (stream: ClientHttp2Stream, headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void): this;Adds the
listenerfunction to the end of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
on(event: string | symbol,listener: (...args: any[]) => void): this;Adds the
listenerfunction to the end of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
- once(event: 'altsvc',listener: (alt: string, origin: string, stream: number) => void): this;
Adds a one-time
listenerfunction for the event namedeventName. The next timeeventNameis triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
once(event: 'origin',listener: (origins: string[]) => void): this;Adds a one-time
listenerfunction for the event namedeventName. The next timeeventNameis triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
once(event: 'connect',): this;Adds a one-time
listenerfunction for the event namedeventName. The next timeeventNameis triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
once(event: 'stream',listener: (stream: ClientHttp2Stream, headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void): this;Adds a one-time
listenerfunction for the event namedeventName. The next timeeventNameis triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
once(event: string | symbol,listener: (...args: any[]) => void): this;Adds a one-time
listenerfunction for the event namedeventName. The next timeeventNameis triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
- ping(): boolean;
Sends a
PINGframe to the connected HTTP/2 peer. Acallbackfunction must be provided. The method will returntrueif thePINGwas sent,falseotherwise.The maximum number of outstanding (unacknowledged) pings is determined by the
maxOutstandingPingsconfiguration option. The default maximum is 10.If provided, the
payloadmust be aBuffer,TypedArray, orDataViewcontaining 8 bytes of data that will be transmitted with thePINGand returned with the ping acknowledgment.The callback will be invoked with three arguments: an error argument that will be
nullif thePINGwas successfully acknowledged, adurationargument that reports the number of milliseconds elapsed since the ping was sent and the acknowledgment was received, and aBuffercontaining the 8-bytePINGpayload.session.ping(Buffer.from('abcdefgh'), (err, duration, payload) => { if (!err) { console.log(`Ping acknowledged in ${duration} milliseconds`); console.log(`With payload '${payload.toString()}'`); } });If the
payloadargument is not specified, the default payload will be the 64-bit timestamp (little endian) marking the start of thePINGduration. - prependListener(event: 'altsvc',listener: (alt: string, origin: string, stream: number) => void): this;
Adds the
listenerfunction to the beginning of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
prependListener(event: 'origin',listener: (origins: string[]) => void): this;Adds the
listenerfunction to the beginning of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
prependListener(event: 'connect',): this;Adds the
listenerfunction to the beginning of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
prependListener(event: 'stream',listener: (stream: ClientHttp2Stream, headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void): this;Adds the
listenerfunction to the beginning of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
prependListener(event: string | symbol,listener: (...args: any[]) => void): this;Adds the
listenerfunction to the beginning of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
- prependOnceListener(event: 'altsvc',listener: (alt: string, origin: string, stream: number) => void): this;
Adds a one-time
listenerfunction for the event namedeventNameto the beginning of the listeners array. The next timeeventNameis triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
prependOnceListener(event: 'origin',listener: (origins: string[]) => void): this;Adds a one-time
listenerfunction for the event namedeventNameto the beginning of the listeners array. The next timeeventNameis triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
prependOnceListener(event: 'connect',): this;Adds a one-time
listenerfunction for the event namedeventNameto the beginning of the listeners array. The next timeeventNameis triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
prependOnceListener(event: 'stream',listener: (stream: ClientHttp2Stream, headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void): this;Adds a one-time
listenerfunction for the event namedeventNameto the beginning of the listeners array. The next timeeventNameis triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
prependOnceListener(event: string | symbol,listener: (...args: any[]) => void): this;Adds a one-time
listenerfunction for the event namedeventNameto the beginning of the listeners array. The next timeeventNameis triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
- rawListeners(eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName, including any wrappers (such as those created by.once()).import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log'); Calls
ref()on thisHttp2Sessioninstance's underlyingnet.Socket.- removeAllListeners(eventName?: string | symbol): this;
Removes all listeners, or those of the specified
eventName.It is bad practice to remove listeners added elsewhere in the code, particularly when the
EventEmitterinstance was created by some other component or module (e.g. sockets or file streams).Returns a reference to the
EventEmitter, so that calls can be chained. - removeListener(eventName: string | symbol,listener: (...args: any[]) => void): this;
Removes the specified
listenerfrom the listener array for the event namedeventName.const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback);removeListener()will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specifiedeventName, thenremoveListener()must be called multiple times to remove each instance.Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any
removeListener()orremoveAllListeners()calls after emitting and before the last listener finishes execution will not remove them fromemit()in progress. Subsequent events behave as expected.import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // ABecause listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the
emitter.listeners()method will need to be recreated.When a single function has been added as a handler multiple times for a single event (as in the example below),
removeListener()will remove the most recently added instance. In the example theonce('ping')listener is removed:import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping');Returns a reference to the
EventEmitter, so that calls can be chained. For HTTP/2 Client
Http2Sessioninstances only, thehttp2session.request()creates and returns anHttp2Streaminstance that can be used to send an HTTP/2 request to the connected server.When a
ClientHttp2Sessionis first created, the socket may not yet be connected. Ifclienthttp2session.request()is called during this time, the actual request will be deferred until the socket is ready to go. If thesessionis closed before the actual request is executed, anERR_HTTP2_GOAWAY_SESSIONerror is thrown.This method is only available if
http2session.typeis equal tohttp2.constants.NGHTTP2_SESSION_CLIENT.import http2 from 'node:http2'; const clientSession = http2.connect('https://localhost:1234'); const { HTTP2_HEADER_PATH, HTTP2_HEADER_STATUS, } = http2.constants; const req = clientSession.request({ [HTTP2_HEADER_PATH]: '/' }); req.on('response', (headers) => { console.log(headers[HTTP2_HEADER_STATUS]); req.on('data', (chunk) => { // .. }); req.on('end', () => { // .. }); });When the
options.waitForTrailersoption is set, the'wantTrailers'event is emitted immediately after queuing the last chunk of payload data to be sent. Thehttp2stream.sendTrailers()method can then be called to send trailing headers to the peer.When
options.waitForTrailersis set, theHttp2Streamwill not automatically close when the finalDATAframe is transmitted. User code must call eitherhttp2stream.sendTrailers()orhttp2stream.close()to close theHttp2Stream.When
options.signalis set with anAbortSignaland thenaborton the correspondingAbortControlleris called, the request will emit an'error'event with anAbortErrorerror.The
:methodand:pathpseudo-headers, if not specified withinheaders, respectively default to::method='GET'and:path=/
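For example, a minimal client-side sketch of the trailer flow described above; the endpoint and the trailer name are hypothetical:

import http2 from 'node:http2';

// Hypothetical local endpoint; omitting ':method' and ':path' would use the
// defaults noted above (GET /).
const client = http2.connect('http://localhost:8000');
const req = client.request(
  { ':method': 'POST', ':path': '/upload' },
  { waitForTrailers: true },
);

// 'wantTrailers' fires after the final DATA frame has been queued; the stream
// stays open until sendTrailers() or close() is called.
req.on('wantTrailers', () => {
  req.sendTrailers({ 'trailer-checksum': 'abc123' }); // hypothetical trailer name
});

req.on('response', (headers) => console.log(headers[':status']));
req.resume();
req.on('close', () => client.close());
req.end('payload');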
- setLocalWindowSize(windowSize: number): void;
Sets the local endpoint's window size. The
windowSizeis the total window size to set, not the delta.import http2 from 'node:http2'; const server = http2.createServer(); const expectedWindowSize = 2 ** 20; server.on('connect', (session) => { // Set local window size to be 2 ** 20 session.setLocalWindowSize(expectedWindowSize); }); - n: number): this;
By default
EventEmitters will print a warning if more than10listeners are added for a particular event. This is a useful default that helps finding memory leaks. Theemitter.setMaxListeners()method allows the limit to be modified for this specificEventEmitterinstance. The value can be set toInfinity(or0) to indicate an unlimited number of listeners.Returns a reference to the
EventEmitter, so that calls can be chained. - setTimeout(msecs: number,callback?: () => void): void;
Used to set a callback function that is called when there is no activity on the
Http2Sessionaftermsecsmilliseconds. The givencallbackis registered as a listener on the'timeout'event. - ): void;
Updates the current local settings for this
Http2Sessionand sends a newSETTINGSframe to the connected HTTP/2 peer.Once called, the
http2session.pendingSettingsAckproperty will betruewhile the session is waiting for the remote peer to acknowledge the new settings.The new settings will not become effective until the
SETTINGSacknowledgment is received and the'localSettings'event is emitted. It is possible to send multipleSETTINGSframes while acknowledgment is still pending.@param callbackCallback that is called once the session is connected or right away if the session is already connected.
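A minimal sketch, assuming session is an established Http2Session; the enablePush value is only an illustration:

session.settings({ enablePush: false }, (err, settings, duration) => {
  if (err) throw err;
  // Invoked once the new SETTINGS frame has been acknowledged by the peer.
  console.log(`SETTINGS acknowledged after ${duration} ms; enablePush is now ${settings.enablePush}`);
});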
Calls
unref()on thisHttp2Sessioninstance's underlyingnet.Socket.
interface ClientHttp2Stream
Duplex streams are streams that implement both the
ReadableandWritableinterfaces.Examples of
Duplexstreams include:TCP socketszlib streamscrypto streams
- readonly aborted: boolean
Set to
trueif theHttp2Streaminstance was aborted abnormally. When set, the'aborted'event will have been emitted. - allowHalfOpen: boolean
If
falsethen the stream will automatically end the writable side when the readable side ends. Set initially by theallowHalfOpenconstructor option, which defaults totrue.This can be changed manually to change the half-open behavior of an existing
Duplexstream instance, but must be changed before the'end'event is emitted. - readonly bufferSize: number
This property shows the number of characters currently buffered to be written. See
net.Socket.bufferSizefor details. - readonly destroyed: boolean
Set to
trueif theHttp2Streaminstance has been destroyed and is no longer usable. - readonly endAfterHeaders: boolean
Set to
trueif theEND_STREAMflag was set in the request or response HEADERS frame received, indicating that no additional data should be received and the readable side of theHttp2Streamwill be closed. - readonly id?: number
The numeric stream identifier of this
Http2Streaminstance. Set toundefinedif the stream identifier has not yet been assigned. - readonly pending: boolean
Set to
trueif theHttp2Streaminstance has not yet been assigned a numeric stream identifier. - readable: boolean
Is
trueif it is safe to call read, which means the stream has not been destroyed or emitted'error'or'end'. - readonly readableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'end'. - readonly readableEncoding: null | BufferEncoding
Getter for the property
encodingof a givenReadablestream. Theencodingproperty can be set using the setEncoding method. - readonly readableFlowing: null | boolean
This property reflects the current state of a
Readablestream as described in the Three states section. - readonly readableHighWaterMark: number
Returns the value of
highWaterMarkpassed when creating thisReadable. - readonly readableLength: number
This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the
highWaterMark. - readonly rstCode: number
Set to the
RST_STREAMerror codereported when theHttp2Streamis destroyed after either receiving anRST_STREAMframe from the connected peer, callinghttp2stream.close(), orhttp2stream.destroy(). Will beundefinedif theHttp2Streamhas not been closed. - readonly sentHeaders: OutgoingHttpHeaders
An object containing the outbound headers sent for this
Http2Stream. - readonly sentInfoHeaders?: OutgoingHttpHeaders[]
An array of objects containing the outbound informational (additional) headers sent for this
Http2Stream. - readonly sentTrailers?: OutgoingHttpHeaders
An object containing the outbound trailers sent for this
Http2Stream. - readonly session: undefined | Http2Session
A reference to the
Http2Sessioninstance that owns thisHttp2Stream. The value will beundefinedafter theHttp2Streaminstance is destroyed. - readonly state: StreamState
Provides miscellaneous information about the current state of the
Http2Stream.A current state of this
Http2Stream. - readonly writable: boolean
Is
trueif it is safe to callwritable.write(), which means the stream has not been destroyed, errored, or ended. - readonly writableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'finish'. - readonly writableCorked: number
Number of times
writable.uncork()needs to be called in order to fully uncork the stream. - readonly writableEnded: boolean
Is
trueafterwritable.end()has been called. This property does not indicate whether the data has been flushed, for this usewritable.writableFinishedinstead. - readonly writableHighWaterMark: number
Return the value of
highWaterMarkpassed when creating thisWritable. - readonly writableLength: number
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the
highWaterMark. - readonly writableNeedDrain: boolean
Is
trueif the stream's buffer has been full and stream will emit'drain'. Calls
readable.destroy()with anAbortErrorand returns a promise that fulfills when the stream is finished.- @returns
AsyncIteratorto fully consume the stream. - addListener(event: 'continue',listener: () => {}): this;
Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
addListener(event: 'headers',): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
addListener(event: 'push',): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
addListener(event: 'response',): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
addListener(event: string | symbol,listener: (...args: any[]) => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
This method returns a new stream with chunks of the underlying stream paired with a counter in the form
[index, chunk]. The first index value is0and it increases by 1 for each chunk produced.@returnsa stream of indexed pairs.
- close(code?: number,callback?: () => void): void;
Closes the
Http2Streaminstance by sending anRST_STREAMframe to the connected HTTP/2 peer.@param codeUnsigned 32-bit integer identifying the error code.
@param callbackAn optional function registered to listen for the
'close'event. - stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
The
writable.cork()method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.The primary intent of
writable.cork()is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination,writable.cork()buffers all the chunks untilwritable.uncork()is called, which will pass them all towritable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use ofwritable.cork()without implementingwritable._writev()may have an adverse effect on throughput.See also:
writable.uncork(),writable._writev().- ): this;
Destroy the stream. Optionally emit an
'error'event, and emit a'close'event (unlessemitCloseis set tofalse). After this call, the readable stream will release any internal resources and subsequent calls topush()will be ignored.Once
destroy()has been called any further calls will be a no-op and no further errors except from_destroy()may be emitted as'error'.Implementors should not override this method, but instead implement
readable._destroy().@param errorError which will be passed as payload in
'error'event - drop(limit: number,
This method returns a new stream with the first limit chunks dropped from the start.
@param limitthe number of chunks to drop from the readable.
@returnsa stream with limit chunks dropped from the start.
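A small sketch with a plain stream.Readable (ClientHttp2Stream inherits the same experimental helper); drop() is commonly paired with take(), documented further below:

import { Readable } from 'node:stream';

// Skip the first two chunks, keep the next three, and collect the result.
const window = await Readable.from([1, 2, 3, 4, 5, 6]).drop(2).take(3).toArray();
console.log(window); // [ 3, 4, 5 ]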
- emit(event: 'continue'): boolean;
Synchronously calls each of the listeners registered for the event named
eventName, in the order they were registered, passing the supplied arguments to each.Returns
trueif the event had listeners,falseotherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener - end(cb?: () => void): this;
Calling the
writable.end()method signals that no more data will be written to theWritable. The optionalchunkandencodingarguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!end(chunk: any,cb?: () => void): this;Calling the
writable.end()method signals that no more data will be written to theWritable. The optionalchunkandencodingarguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.end(chunk: any,encoding: BufferEncoding,cb?: () => void): this;Calling the
writable.end()method signals that no more data will be written to theWritable. The optionalchunkandencodingarguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.@param encodingThe encoding if
chunkis a string Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbols.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]- ): Promise<boolean>;
This method is similar to
Array.prototype.everyand calls fn on each chunk in the stream to check if all awaited return values are truthy value for fn. Once an fn call on a chunkawaited return value is falsy, the stream is destroyed and the promise is fulfilled withfalse. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled withtrue.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
trueif fn returned a truthy value for every one of the chunks. This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be
awaited.@param fna function to filter chunks from the stream. Async or not.
@returnsa stream filtered with the predicate fn.
- ): Promise<undefined | T>;
This method is similar to
Array.prototype.findand calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled withundefined.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefinedif no element was found.find(): Promise<any>;This method is similar to
Array.prototype.findand calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled withundefined.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefinedif no element was found. This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.
It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.
@param fna function to map over every chunk in the stream. May be async. May be a stream or generator.
@returnsa stream flat-mapped with the function fn.
- ): Promise<void>;
This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be
awaited.This method is different from
for await...ofloops in that it can optionally process chunks concurrently. In addition, aforEachiteration can only be stopped by having passed asignaloption and aborting the related AbortController whilefor await...ofcan be stopped withbreakorreturn. In either case the stream will be destroyed.This method is different from listening to the
'data'event in that it uses thereadableevent in the underlying machinary and can limit the number of concurrent fn calls.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise for when the stream has finished.
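A short sketch with a plain stream.Readable, assuming Node's experimental stream helpers; the concurrency option caps the number of in-flight fn calls:

import { Readable } from 'node:stream';
import { setTimeout as wait } from 'node:timers/promises';

// Process at most two chunks at a time.
await Readable.from(['a', 'b', 'c', 'd']).forEach(
  async (chunk) => {
    await wait(10); // simulate asynchronous work
    console.log(chunk);
  },
  { concurrency: 2 },
);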
Returns the current max listener value for the
EventEmitterwhich is either set byemitter.setMaxListeners(n)or defaults to EventEmitter.defaultMaxListeners.The
readable.isPaused()method returns the current operating state of theReadable. This is used primarily by the mechanism that underlies thereadable.pipe()method. In most typical cases, there will be no reason to use this method directly.const readable = new stream.Readable(); readable.isPaused(); // === false readable.pause(); readable.isPaused(); // === true readable.resume(); readable.isPaused(); // === false- options?: { destroyOnReturn: boolean }): AsyncIterator<any>;
The iterator created by this method gives users the option to cancel the destruction of the stream if the
for await...ofloop is exited byreturn,break, orthrow, or if the iterator should destroy the stream if the stream emitted an error during iteration. - eventName: string | symbol,listener?: Function): number;
Returns the number of listeners listening for the event named
eventName. Iflisteneris provided, it will return how many times the listener is found in the list of the listeners of the event.@param eventNameThe name of the event being listened for
@param listenerThe event handler function
- listeners(eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName.server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ] - map(
This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be
awaited before being passed to the result stream.@param fna function to map over every chunk in the stream. Async or not.
@returnsa stream mapped with the function fn.
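For example, a minimal sketch with a plain stream.Readable (ClientHttp2Stream inherits the same experimental helper):

import { Readable } from 'node:stream';

// Uppercase each chunk; map() returns a new Readable that can be iterated
// directly or collected with toArray().
const upper = await Readable.from(['a', 'b', 'c'])
  .map((chunk) => chunk.toUpperCase())
  .toArray();
console.log(upper); // [ 'A', 'B', 'C' ]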
- off(eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for
emitter.removeListener(). - on(event: 'continue',listener: () => {}): this;
Adds the
listenerfunction to the end of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
- once(event: 'continue',listener: () => {}): this;
Adds a one-time
listenerfunction for the event namedeventName. The next timeeventNameis triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
The
readable.pause()method will cause a stream in flowing mode to stop emitting'data'events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.const readable = getReadableStreamSomehow(); readable.on('data', (chunk) => { console.log(`Received ${chunk.length} bytes of data.`); readable.pause(); console.log('There will be no additional data for 1 second.'); setTimeout(() => { console.log('Now data will start flowing again.'); readable.resume(); }, 1000); });The
readable.pause()method has no effect if there is a'readable'event listener.- event: 'continue',listener: () => {}): this;
Adds the
listenerfunction to the beginning of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
event: 'headers',): this;event: 'push',): this;event: 'response',): this; - event: 'continue',listener: () => {}): this;
Adds a one-time
listenerfunction for the event namedeventNameto the beginning of the listeners array. The next timeeventNameis triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
event: 'headers',): this;event: 'push',): this;event: 'response',): this; - eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName, including any wrappers (such as those created by.once()).import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log'); - read(size?: number): any;
The
readable.read()method reads data out of the internal buffer and returns it. If no data is available to be read,nullis returned. By default, the data is returned as aBufferobject unless an encoding has been specified using thereadable.setEncoding()method or the stream is operating in object mode.The optional
sizeargument specifies a specific number of bytes to read. Ifsizebytes are not available to be read,nullwill be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.If the
sizeargument is not specified, all of the data contained in the internal buffer will be returned.The
sizeargument must be less than or equal to 1 GiB.The
readable.read()method should only be called onReadablestreams operating in paused mode. In flowing mode,readable.read()is called automatically until the internal buffer is fully drained.const readable = getReadableStreamSomehow(); // 'readable' may be triggered multiple times as data is buffered in readable.on('readable', () => { let chunk; console.log('Stream is readable (new data received in buffer)'); // Use a loop to make sure we read all currently available data while (null !== (chunk = readable.read())) { console.log(`Read ${chunk.length} bytes of data...`); } }); // 'end' will be triggered once when there is no more data available readable.on('end', () => { console.log('Reached end of stream.'); });Each call to
readable.read()returns a chunk of data, ornull. The chunks are not concatenated. Awhileloop is necessary to consume all data currently in the buffer. When reading a large file.read()may returnnull, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new'readable'event will be emitted when there is more data in the buffer. Finally the'end'event will be emitted when there is no more data to come.Therefore to read a file's whole contents from a
readable, it is necessary to collect chunks across multiple'readable'events:const chunks = []; readable.on('readable', () => { let chunk; while (null !== (chunk = readable.read())) { chunks.push(chunk); } }); readable.on('end', () => { const content = chunks.join(''); });A
Readablestream in object mode will always return a single item from a call toreadable.read(size), regardless of the value of thesizeargument.If the
readable.read()method returns a chunk of data, a'data'event will also be emitted.Calling read after the
'end'event has been emitted will returnnull. No runtime error will be raised.@param sizeOptional argument to specify how much data to read.
- initial?: undefined,): Promise<T>;
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeErrorwith theERR_INVALID_ARGScode property.The reducer function iterates the stream element-by-element which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to
readable.mapmethod.@param fna reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
initial: T,): Promise<T>;This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeErrorwith theERR_INVALID_ARGScode property.The reducer function iterates the stream element-by-element which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to
readable.mapmethod.@param fna reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
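A minimal sketch with a plain stream.Readable, using an explicit initial value:

import { Readable } from 'node:stream';

// Sum the byte length of every chunk.
const totalBytes = await Readable.from([Buffer.from('ab'), Buffer.from('cde')])
  .reduce((total, chunk) => total + chunk.length, 0);
console.log(totalBytes); // 5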
- removeAllListeners(eventName?: string | symbol): this;
Removes all listeners, or those of the specified
eventName.It is bad practice to remove listeners added elsewhere in the code, particularly when the
EventEmitterinstance was created by some other component or module (e.g. sockets or file streams).Returns a reference to the
EventEmitter, so that calls can be chained. - event: 'close',listener: () => void): this;
Removes the specified
listenerfrom the listener array for the event namedeventName.const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback);removeListener()will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specifiedeventName, thenremoveListener()must be called multiple times to remove each instance.Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any
removeListener()orremoveAllListeners()calls after emitting and before the last listener finishes execution will not remove them fromemit()in progress. Subsequent events behave as expected.import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // ABecause listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the
emitter.listeners()method will need to be recreated.When a single function has been added as a handler multiple times for a single event (as in the example below),
removeListener()will remove the most recently added instance. In the example theonce('ping')listener is removed:import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping');Returns a reference to the
EventEmitter, so that calls can be chained. The
readable.resume()method causes an explicitly pausedReadablestream to resume emitting'data'events, switching the stream into flowing mode.The
readable.resume()method can be used to fully consume the data from a stream without actually processing any of that data:getReadableStreamSomehow() .resume() .on('end', () => { console.log('Reached the end, but did not read anything.'); });The
readable.resume()method has no effect if there is a'readable'event listener.- ): void;
Sends a trailing
HEADERSframe to the connected HTTP/2 peer. This method will cause theHttp2Streamto be immediately closed and must only be called after the'wantTrailers'event has been emitted. When sending a request or sending a response, theoptions.waitForTrailersoption must be set in order to keep theHttp2Streamopen after the finalDATAframe so that trailers can be sent.import http2 from 'node:http2'; const server = http2.createServer(); server.on('stream', (stream) => { stream.respond(undefined, { waitForTrailers: true }); stream.on('wantTrailers', () => { stream.sendTrailers({ xyz: 'abc' }); }); stream.end('Hello World'); });The HTTP/1 specification forbids trailers from containing HTTP/2 pseudo-header fields (e.g.
':method',':path', etc). - encoding: BufferEncoding): this;
The
writable.setDefaultEncoding()method sets the defaultencodingfor aWritablestream.@param encodingThe new default encoding
- encoding: BufferEncoding): this;
The
readable.setEncoding()method sets the character encoding for data read from theReadablestream.By default, no encoding is assigned and stream data will be returned as
Bufferobjects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than asBufferobjects. For instance, callingreadable.setEncoding('utf8')will cause the output data to be interpreted as UTF-8 data, and passed as strings. Callingreadable.setEncoding('hex')will cause the data to be encoded in hexadecimal string format.The
Readablestream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream asBufferobjects.const readable = getReadableStreamSomehow(); readable.setEncoding('utf8'); readable.on('data', (chunk) => { assert.equal(typeof chunk, 'string'); console.log('Got %d characters of string data:', chunk.length); });@param encodingThe encoding to use.
- n: number): this;
By default
EventEmitters will print a warning if more than10listeners are added for a particular event. This is a useful default that helps finding memory leaks. Theemitter.setMaxListeners()method allows the limit to be modified for this specificEventEmitterinstance. The value can be set toInfinity(or0) to indicate an unlimited number of listeners.Returns a reference to the
EventEmitter, so that calls can be chained. - msecs: number,callback?: () => void): void;
import http2 from 'node:http2'; const client = http2.connect('http://example.org:8000'); const { NGHTTP2_CANCEL } = http2.constants; const req = client.request({ ':path': '/' }); // Cancel the stream if there's no activity after 5 seconds req.setTimeout(5000, () => req.close(NGHTTP2_CANCEL)); - some(): Promise<boolean>;
This method is similar to
Array.prototype.someand calls fn on each chunk in the stream until the awaited return value istrue(or any truthy value). Once an fn call on a chunkawaited return value is truthy, the stream is destroyed and the promise is fulfilled withtrue. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled withfalse.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
trueif fn returned a truthy value for at least one of the chunks. - @param limit
the number of chunks to take from the readable.
@returnsa stream with limit chunks taken.
- ): Promise<any[]>;
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.
@returnsa promise containing an array with the contents of the stream.
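A sketch, assuming req is a ClientHttp2Stream returned by clienthttp2session.request() and the response body is small enough to buffer:

// Collect the whole response body in memory, then decode it.
const chunks = await req.toArray();
console.log(Buffer.concat(chunks).toString('utf8'));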
The
writable.uncork()method flushes all data buffered since cork was called.When using
writable.cork()andwritable.uncork()to manage the buffering of writes to a stream, defer calls towritable.uncork()usingprocess.nextTick(). Doing so allows batching of allwritable.write()calls that occur within a given Node.js event loop phase.stream.cork(); stream.write('some '); stream.write('data '); process.nextTick(() => stream.uncork());If the
writable.cork()method is called multiple times on a stream, the same number of calls towritable.uncork()must be called to flush the buffered data.stream.cork(); stream.write('some '); stream.cork(); stream.write('data '); process.nextTick(() => { stream.uncork(); // The data will not be flushed until uncork() is called a second time. stream.uncork(); });See also:
writable.cork().- destination?: WritableStream): this;
The
readable.unpipe()method detaches aWritablestream previously attached using the pipe method.If the
destinationis not specified, then all pipes are detached.If the
destinationis specified, but no pipe is set up for it, then the method does nothing.import fs from 'node:fs'; const readable = getReadableStreamSomehow(); const writable = fs.createWriteStream('file.txt'); // All the data from readable goes into 'file.txt', // but only for the first second. readable.pipe(writable); setTimeout(() => { console.log('Stop writing to file.txt.'); readable.unpipe(writable); console.log('Manually close the file stream.'); writable.end(); }, 1000);@param destinationOptional specific stream to unpipe
- unshift(chunk: any,encoding?: BufferEncoding): void;
Passing
chunkasnullsignals the end of the stream (EOF) and behaves the same asreadable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.The
readable.unshift()method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.The
stream.unshift(chunk)method cannot be called after the'end'event has been emitted or a runtime error will be thrown.Developers using
stream.unshift()often should consider switching to use of aTransformstream instead. See theAPI for stream implementerssection for more information.// Pull off a header delimited by \n\n. // Use unshift() if we get too much. // Call the callback with (error, header, stream). import { StringDecoder } from 'node:string_decoder'; function parseHeader(stream, callback) { stream.on('error', callback); stream.on('readable', onReadable); const decoder = new StringDecoder('utf8'); let header = ''; function onReadable() { let chunk; while (null !== (chunk = stream.read())) { const str = decoder.write(chunk); if (str.includes('\n\n')) { // Found the header boundary. const split = str.split(/\n\n/); header += split.shift(); const remaining = split.join('\n\n'); const buf = Buffer.from(remaining, 'utf8'); stream.removeListener('error', callback); // Remove the 'readable' listener before unshifting. stream.removeListener('readable', onReadable); if (buf.length) stream.unshift(buf); // Now the body of the message can be read from the stream. callback(null, header, stream); return; } // Still reading the header. header += str; } } }Unlike push,
stream.unshift(chunk)will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results ifreadable.unshift()is called during a read (i.e. from within a _read implementation on a custom stream). Following the call toreadable.unshift()with an immediate push will reset the reading state appropriately, however it is best to simply avoid callingreadable.unshift()while in the process of performing a read.@param chunkChunk of data to unshift onto the read queue. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray}, {DataView} ornull. For object mode streams,chunkmay be any JavaScript value.@param encodingEncoding of string chunks. Must be a valid
Bufferencoding, such as'utf8'or'ascii'. - wrap(stream: ReadableStream): this;
Prior to Node.js 0.10, streams did not implement the entire
node:streammodule API as it is currently defined. (SeeCompatibilityfor more information.)When using an older Node.js library that emits
'data'events and has a pause method that is advisory only, thereadable.wrap()method can be used to create aReadablestream that uses the old stream as its data source.It will rarely be necessary to use
readable.wrap()but the method has been provided as a convenience for interacting with older Node.js applications and libraries.import { OldReader } from './old-api-module.js'; import { Readable } from 'node:stream'; const oreader = new OldReader(); const myReader = new Readable().wrap(oreader); myReader.on('readable', () => { myReader.read(); // etc. });@param streamAn "old style" readable stream
- chunk: any,): boolean;
The
writable.write()method writes some data to the stream, and calls the suppliedcallbackonce the data has been fully handled. If an error occurs, thecallbackwill be called with the error as its first argument. Thecallbackis called asynchronously and before'error'is emitted.The return value is
trueif the internal buffer is less than thehighWaterMarkconfigured when the stream was created after admittingchunk. Iffalseis returned, further attempts to write data to the stream should stop until the'drain'event is emitted.While a stream is not draining, calls to
write()will bufferchunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'event will be emitted. Oncewrite()returns false, do not write more chunks until the'drain'event is emitted. While callingwrite()on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform, because theTransformstreams are paused by default until they are piped or a'data'or'readable'event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readableand use pipe. However, if callingwrite()is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });A
Writablestream in object mode will always ignore theencodingargument.@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.@param callbackCallback for when this chunk of data is flushed.
@returnsfalseif the stream wishes for the calling code to wait for the'drain'event to be emitted before continuing to write additional data; otherwisetrue.chunk: any,encoding: BufferEncoding,): boolean;The
writable.write()method writes some data to the stream, and calls the suppliedcallbackonce the data has been fully handled. If an error occurs, thecallbackwill be called with the error as its first argument. Thecallbackis called asynchronously and before'error'is emitted.The return value is
trueif the internal buffer is less than thehighWaterMarkconfigured when the stream was created after admittingchunk. Iffalseis returned, further attempts to write data to the stream should stop until the'drain'event is emitted.While a stream is not draining, calls to
write()will bufferchunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'event will be emitted. Oncewrite()returns false, do not write more chunks until the'drain'event is emitted. While callingwrite()on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform, because theTransformstreams are paused by default until they are piped or a'data'or'readable'event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readableand use pipe. However, if callingwrite()is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });A
Writablestream in object mode will always ignore theencodingargument.@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.@param encodingThe encoding, if
chunkis a string.@param callbackCallback for when this chunk of data is flushed.
@returnsfalseif the stream wishes for the calling code to wait for the'drain'event to be emitted before continuing to write additional data; otherwisetrue.
interface ClientSessionOptions
- createConnection?: (authority: URL, option: SessionOptions) => Duplex
An optional callback that receives the
URLinstance passed toconnectand theoptionsobject, and returns anyDuplexstream that is to be used as the connection for this session. - maxHeaderListPairs?: number
Sets the maximum number of header entries. This is similar to
server.maxHeadersCountorrequest.maxHeadersCountin thenode:httpmodule. The minimum value is1. - maxReservedRemoteStreams?: number
Sets the maximum number of reserved push streams the client will accept at any given time. Once the number of currently reserved push streams reaches this limit, new push streams sent by the server will be automatically rejected. The minimum allowed value is 0. The maximum allowed value is 2<sup>32</sup>-1. A negative value sets this option to the maximum allowed value.
- maxSendHeaderBlockLength?: number
Sets the maximum allowed size for a serialized, compressed block of headers. Attempts to send headers that exceed this limit will result in a
'frameError'event being emitted and the stream being closed and destroyed. - maxSessionMemory?: number
Sets the maximum memory that the
Http2Session is permitted to use. The value is expressed in terms of number of megabytes, e.g. 1 equals 1 megabyte. The minimum value allowed is 1. This is a credit-based limit: existing Http2Streams may cause this limit to be exceeded, but new Http2Stream instances will be rejected while this limit is exceeded. The current number of Http2Stream sessions, the current memory use of the header compression tables, current data queued to be sent, and unacknowledged PING and SETTINGS frames are all counted towards the current limit. - maxSettings?: number
Sets the maximum number of settings entries per
SETTINGSframe. The minimum value allowed is1. - paddingStrategy?: number
Strategy used for determining the amount of padding to use for
HEADERSandDATAframes. - peerMaxConcurrentStreams?: number
Sets the maximum number of concurrent streams for the remote peer as if a
SETTINGSframe had been received. Will be overridden if the remote peer sets its own value formaxConcurrentStreams. - protocol?: 'http:' | 'https:'
The protocol to connect with, if not set in the
authority. Value may be either'http:'or'https:'. - remoteCustomSettings?: number[]
The array of integer values determines the settings types, which are included in the
CustomSettings property of the received remoteSettings. Please see the CustomSettings property of the Http2Settings object for more information on the allowed setting types. - strictFieldWhitespaceValidation?: boolean
If
true, it turns on strict leading and trailing whitespace validation for HTTP/2 header field names and values as per RFC-9113. - unknownProtocolTimeout?: number
Specifies a timeout in milliseconds that a server should wait when an 'unknownProtocol' event is emitted. If the socket has not been destroyed by that time the server will destroy it.
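To make the options above concrete, here is a hedged sketch of opening a client session with http2.connect and a few ClientSessionOptions; the authority and every value shown are illustrative, and all of the options are optional:
import { connect } from 'node:http2';

const client = connect('http://localhost:8080', {
  maxSessionMemory: 10,          // ~10 megabytes for this session
  peerMaxConcurrentStreams: 100, // assumed until the peer's SETTINGS frame arrives
  maxReservedRemoteStreams: 0,   // reject server push outright
});

client.on('error', (err) => console.error('session error:', err));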
interface ClientSessionRequestOptions
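The members of ClientSessionRequestOptions are not listed here, so the following is only a hedged sketch of passing per-request options (endStream and waitForTrailers are commonly used ones) as the second argument to a client request; the authority and path are placeholders:
import { connect } from 'node:http2';

const client = connect('http://localhost:8080'); // placeholder authority

// The second argument is the per-request options object; both fields are optional.
const req = client.request(
  { ':method': 'GET', ':path': '/status' },
  { endStream: true, waitForTrailers: false },
);

req.setEncoding('utf8');
let body = '';
req.on('data', (chunk) => { body += chunk; });
req.on('end', () => {
  console.log(body);
  client.close();
});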
interface Http2SecureServer<Http1Request extends typeof IncomingMessage = typeof IncomingMessage, Http1Response extends typeof ServerResponse = typeof ServerResponse, Http2Request extends typeof Http2ServerRequest = typeof Http2ServerRequest, Http2Response extends typeof Http2ServerResponse = typeof Http2ServerResponse>
Accepts encrypted connections using TLS or SSL.
- maxConnections: number
Set this property to reject connections when the server's connection count gets high.
It is not recommended to use this option once a socket has been sent to a child with
child_process.fork(). Calls server.close() and returns a promise that fulfills when the server has closed.
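As a small, hedged illustration of the maxConnections cap described above; the limit is arbitrary and the TLS key/cert material is omitted as a placeholder:
import { createSecureServer } from 'node:http2';

const server = createSecureServer({ /* key and cert omitted */ });

// Cap concurrent connections; connections beyond the cap are rejected.
server.maxConnections = 1000;

server.getConnections((err, count) => {
  if (!err) console.log(`currently serving ${count} connections`);
});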
- hostname: string,): void;
The
server.addContext()method adds a secure context that will be used if the client request's SNI name matches the suppliedhostname(or wildcard).When there are multiple matching contexts, the most recently added one is used.
@param hostnameA SNI host name or wildcard (e.g.
'*')@param contextAn object containing any of the possible properties from the createSecureContext
optionsarguments (e.g.key,cert,ca, etc), or a TLS context object created with createSecureContext itself. - event: 'checkContinue',listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void): this;
events.EventEmitter
- tlsClientError
- newSession
- OCSPRequest
- resumeSession
- secureConnection
- keylog
event: 'request',listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void): this;events.EventEmitter
- tlsClientError
- newSession
- OCSPRequest
- resumeSession
- secureConnection
- keylog
event: 'session',listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>) => void): this;events.EventEmitter
- tlsClientError
- newSession
- OCSPRequest
- resumeSession
- secureConnection
- keylog
event: 'sessionError',): this;events.EventEmitter
- tlsClientError
- newSession
- OCSPRequest
- resumeSession
- secureConnection
- keylog
event: 'stream',): this;events.EventEmitter
- tlsClientError
- newSession
- OCSPRequest
- resumeSession
- secureConnection
- keylog
event: 'timeout',listener: () => void): this;events.EventEmitter
- tlsClientError
- newSession
- OCSPRequest
- resumeSession
- secureConnection
- keylog
event: 'unknownProtocol',): this;events.EventEmitter
- tlsClientError
- newSession
- OCSPRequest
- resumeSession
- secureConnection
- keylog
event: string | symbol,listener: (...args: any[]) => void): this;events.EventEmitter
- tlsClientError
- newSession
- OCSPRequest
- resumeSession
- secureConnection
- keylog
Returns the bound
address, the addressfamilyname, andportof the server as reported by the operating system if listening on an IP socket (useful to find which port was assigned when getting an OS-assigned address):{ port: 12346, family: 'IPv4', address: '127.0.0.1' }.For a server listening on a pipe or Unix domain socket, the name is returned as a string.
const server = net.createServer((socket) => { socket.end('goodbye\n'); }).on('error', (err) => { // Handle errors here. throw err; }); // Grab an arbitrary unused port. server.listen(() => { console.log('opened server on', server.address()); });server.address()returnsnullbefore the'listening'event has been emitted or after callingserver.close().- ): this;
Stops the server from accepting new connections and keeps existing connections. This function is asynchronous, the server is finally closed when all connections are ended and the server emits a
'close'event. The optionalcallbackwill be called once the'close'event occurs. Unlike that event, it will be called with anErroras its only argument if the server was not open when it was closed.@param callbackCalled when the server is closed.
- emit(event: 'checkContinue',request: InstanceType<Http2Request>,response: InstanceType<Http2Response>): boolean;
Synchronously calls each of the listeners registered for the event named
eventName, in the order they were registered, passing the supplied arguments to each.Returns
trueif the event had listeners,falseotherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listeneremit(event: 'request',request: InstanceType<Http2Request>,response: InstanceType<Http2Response>): boolean; Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbols.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]- ): this;
Asynchronously get the number of concurrent connections on the server. Works when sockets were sent to forks.
Callback should take two arguments
errandcount. Returns the current max listener value for the
EventEmitterwhich is either set byemitter.setMaxListeners(n)or defaults to EventEmitter.defaultMaxListeners.Returns the session ticket keys.
See
Session Resumptionfor more information.@returnsA 48-byte buffer containing the session ticket keys.
- port?: number,hostname?: string,backlog?: number,listeningListener?: () => void): this;
Start a server listening for connections. A
net.Servercan be a TCP or anIPCserver depending on what it listens to.Possible signatures:
server.listen(handle[, backlog][, callback])server.listen(options[, callback])server.listen(path[, backlog][, callback])forIPCserversserver.listen([port[, host[, backlog]]][, callback])for TCP servers
This function is asynchronous. When the server starts listening, the
'listening'event will be emitted. The last parametercallbackwill be added as a listener for the'listening'event.All
listen()methods can take abacklogparameter to specify the maximum length of the queue of pending connections. The actual length will be determined by the OS through sysctl settings such astcp_max_syn_backlogandsomaxconnon Linux. The default value of this parameter is 511 (not 512).All Socket are set to
SO_REUSEADDR(seesocket(7)for details).The
server.listen()method can be called again if and only if there was an error during the firstserver.listen()call orserver.close()has been called. Otherwise, anERR_SERVER_ALREADY_LISTENerror will be thrown.One of the most common errors raised when listening is
EADDRINUSE. This happens when another server is already listening on the requestedport/path/handle. One way to handle this would be to retry after a certain amount of time:server.on('error', (e) => { if (e.code === 'EADDRINUSE') { console.error('Address in use, retrying...'); setTimeout(() => { server.close(); server.listen(PORT, HOST); }, 1000); } });port?: number,hostname?: string,listeningListener?: () => void): this;Start a server listening for connections. A
net.Servercan be a TCP or anIPCserver depending on what it listens to.Possible signatures:
server.listen(handle[, backlog][, callback])server.listen(options[, callback])server.listen(path[, backlog][, callback])forIPCserversserver.listen([port[, host[, backlog]]][, callback])for TCP servers
This function is asynchronous. When the server starts listening, the
'listening'event will be emitted. The last parametercallbackwill be added as a listener for the'listening'event.All
listen()methods can take abacklogparameter to specify the maximum length of the queue of pending connections. The actual length will be determined by the OS through sysctl settings such astcp_max_syn_backlogandsomaxconnon Linux. The default value of this parameter is 511 (not 512).All Socket are set to
SO_REUSEADDR(seesocket(7)for details).The
server.listen()method can be called again if and only if there was an error during the firstserver.listen()call orserver.close()has been called. Otherwise, anERR_SERVER_ALREADY_LISTENerror will be thrown.One of the most common errors raised when listening is
EADDRINUSE. This happens when another server is already listening on the requestedport/path/handle. One way to handle this would be to retry after a certain amount of time:server.on('error', (e) => { if (e.code === 'EADDRINUSE') { console.error('Address in use, retrying...'); setTimeout(() => { server.close(); server.listen(PORT, HOST); }, 1000); } });port?: number,backlog?: number,listeningListener?: () => void): this;Start a server listening for connections. A
net.Servercan be a TCP or anIPCserver depending on what it listens to.Possible signatures:
server.listen(handle[, backlog][, callback])server.listen(options[, callback])server.listen(path[, backlog][, callback])forIPCserversserver.listen([port[, host[, backlog]]][, callback])for TCP servers
This function is asynchronous. When the server starts listening, the
'listening'event will be emitted. The last parametercallbackwill be added as a listener for the'listening'event.All
listen()methods can take abacklogparameter to specify the maximum length of the queue of pending connections. The actual length will be determined by the OS through sysctl settings such astcp_max_syn_backlogandsomaxconnon Linux. The default value of this parameter is 511 (not 512).All Socket are set to
SO_REUSEADDR(seesocket(7)for details).The
server.listen()method can be called again if and only if there was an error during the firstserver.listen()call orserver.close()has been called. Otherwise, anERR_SERVER_ALREADY_LISTENerror will be thrown.One of the most common errors raised when listening is
EADDRINUSE. This happens when another server is already listening on the requestedport/path/handle. One way to handle this would be to retry after a certain amount of time:server.on('error', (e) => { if (e.code === 'EADDRINUSE') { console.error('Address in use, retrying...'); setTimeout(() => { server.close(); server.listen(PORT, HOST); }, 1000); } });port?: number,listeningListener?: () => void): this;Start a server listening for connections. A
net.Servercan be a TCP or anIPCserver depending on what it listens to.Possible signatures:
server.listen(handle[, backlog][, callback])server.listen(options[, callback])server.listen(path[, backlog][, callback])forIPCserversserver.listen([port[, host[, backlog]]][, callback])for TCP servers
This function is asynchronous. When the server starts listening, the
'listening'event will be emitted. The last parametercallbackwill be added as a listener for the'listening'event.All
listen()methods can take abacklogparameter to specify the maximum length of the queue of pending connections. The actual length will be determined by the OS through sysctl settings such astcp_max_syn_backlogandsomaxconnon Linux. The default value of this parameter is 511 (not 512).All Socket are set to
SO_REUSEADDR(seesocket(7)for details).The
server.listen()method can be called again if and only if there was an error during the firstserver.listen()call orserver.close()has been called. Otherwise, anERR_SERVER_ALREADY_LISTENerror will be thrown.One of the most common errors raised when listening is
EADDRINUSE. This happens when another server is already listening on the requestedport/path/handle. One way to handle this would be to retry after a certain amount of time:server.on('error', (e) => { if (e.code === 'EADDRINUSE') { console.error('Address in use, retrying...'); setTimeout(() => { server.close(); server.listen(PORT, HOST); }, 1000); } });path: string,backlog?: number,listeningListener?: () => void): this;Start a server listening for connections. A
net.Servercan be a TCP or anIPCserver depending on what it listens to.Possible signatures:
server.listen(handle[, backlog][, callback])server.listen(options[, callback])server.listen(path[, backlog][, callback])forIPCserversserver.listen([port[, host[, backlog]]][, callback])for TCP servers
This function is asynchronous. When the server starts listening, the
'listening'event will be emitted. The last parametercallbackwill be added as a listener for the'listening'event.All
listen()methods can take abacklogparameter to specify the maximum length of the queue of pending connections. The actual length will be determined by the OS through sysctl settings such astcp_max_syn_backlogandsomaxconnon Linux. The default value of this parameter is 511 (not 512).All Socket are set to
SO_REUSEADDR(seesocket(7)for details).The
server.listen()method can be called again if and only if there was an error during the firstserver.listen()call orserver.close()has been called. Otherwise, anERR_SERVER_ALREADY_LISTENerror will be thrown.One of the most common errors raised when listening is
EADDRINUSE. This happens when another server is already listening on the requestedport/path/handle. One way to handle this would be to retry after a certain amount of time:server.on('error', (e) => { if (e.code === 'EADDRINUSE') { console.error('Address in use, retrying...'); setTimeout(() => { server.close(); server.listen(PORT, HOST); }, 1000); } });path: string,listeningListener?: () => void): this;Start a server listening for connections. A
net.Servercan be a TCP or anIPCserver depending on what it listens to.Possible signatures:
server.listen(handle[, backlog][, callback])server.listen(options[, callback])server.listen(path[, backlog][, callback])forIPCserversserver.listen([port[, host[, backlog]]][, callback])for TCP servers
This function is asynchronous. When the server starts listening, the
'listening'event will be emitted. The last parametercallbackwill be added as a listener for the'listening'event.All
listen()methods can take abacklogparameter to specify the maximum length of the queue of pending connections. The actual length will be determined by the OS through sysctl settings such astcp_max_syn_backlogandsomaxconnon Linux. The default value of this parameter is 511 (not 512).All Socket are set to
SO_REUSEADDR(seesocket(7)for details).The
server.listen()method can be called again if and only if there was an error during the firstserver.listen()call orserver.close()has been called. Otherwise, anERR_SERVER_ALREADY_LISTENerror will be thrown.One of the most common errors raised when listening is
EADDRINUSE. This happens when another server is already listening on the requestedport/path/handle. One way to handle this would be to retry after a certain amount of time:server.on('error', (e) => { if (e.code === 'EADDRINUSE') { console.error('Address in use, retrying...'); setTimeout(() => { server.close(); server.listen(PORT, HOST); }, 1000); } });listeningListener?: () => void): this;Start a server listening for connections. A
net.Servercan be a TCP or anIPCserver depending on what it listens to.Possible signatures:
server.listen(handle[, backlog][, callback])server.listen(options[, callback])server.listen(path[, backlog][, callback])forIPCserversserver.listen([port[, host[, backlog]]][, callback])for TCP servers
This function is asynchronous. When the server starts listening, the
'listening'event will be emitted. The last parametercallbackwill be added as a listener for the'listening'event.All
listen()methods can take abacklogparameter to specify the maximum length of the queue of pending connections. The actual length will be determined by the OS through sysctl settings such astcp_max_syn_backlogandsomaxconnon Linux. The default value of this parameter is 511 (not 512).All Socket are set to
SO_REUSEADDR(seesocket(7)for details).The
server.listen()method can be called again if and only if there was an error during the firstserver.listen()call orserver.close()has been called. Otherwise, anERR_SERVER_ALREADY_LISTENerror will be thrown.One of the most common errors raised when listening is
EADDRINUSE. This happens when another server is already listening on the requestedport/path/handle. One way to handle this would be to retry after a certain amount of time:server.on('error', (e) => { if (e.code === 'EADDRINUSE') { console.error('Address in use, retrying...'); setTimeout(() => { server.close(); server.listen(PORT, HOST); }, 1000); } });handle: any,backlog?: number,listeningListener?: () => void): this;Start a server listening for connections. A
net.Servercan be a TCP or anIPCserver depending on what it listens to.Possible signatures:
server.listen(handle[, backlog][, callback])server.listen(options[, callback])server.listen(path[, backlog][, callback])forIPCserversserver.listen([port[, host[, backlog]]][, callback])for TCP servers
This function is asynchronous. When the server starts listening, the
'listening'event will be emitted. The last parametercallbackwill be added as a listener for the'listening'event.All
listen()methods can take abacklogparameter to specify the maximum length of the queue of pending connections. The actual length will be determined by the OS through sysctl settings such astcp_max_syn_backlogandsomaxconnon Linux. The default value of this parameter is 511 (not 512).All Socket are set to
SO_REUSEADDR(seesocket(7)for details).The
server.listen()method can be called again if and only if there was an error during the firstserver.listen()call orserver.close()has been called. Otherwise, anERR_SERVER_ALREADY_LISTENerror will be thrown.One of the most common errors raised when listening is
EADDRINUSE. This happens when another server is already listening on the requestedport/path/handle. One way to handle this would be to retry after a certain amount of time:server.on('error', (e) => { if (e.code === 'EADDRINUSE') { console.error('Address in use, retrying...'); setTimeout(() => { server.close(); server.listen(PORT, HOST); }, 1000); } });handle: any,listeningListener?: () => void): this;Start a server listening for connections. A
net.Servercan be a TCP or anIPCserver depending on what it listens to.Possible signatures:
server.listen(handle[, backlog][, callback])server.listen(options[, callback])server.listen(path[, backlog][, callback])forIPCserversserver.listen([port[, host[, backlog]]][, callback])for TCP servers
This function is asynchronous. When the server starts listening, the
'listening'event will be emitted. The last parametercallbackwill be added as a listener for the'listening'event.All
listen()methods can take abacklogparameter to specify the maximum length of the queue of pending connections. The actual length will be determined by the OS through sysctl settings such astcp_max_syn_backlogandsomaxconnon Linux. The default value of this parameter is 511 (not 512).All Socket are set to
SO_REUSEADDR(seesocket(7)for details).The
server.listen()method can be called again if and only if there was an error during the firstserver.listen()call orserver.close()has been called. Otherwise, anERR_SERVER_ALREADY_LISTENerror will be thrown.One of the most common errors raised when listening is
EADDRINUSE. This happens when another server is already listening on the requestedport/path/handle. One way to handle this would be to retry after a certain amount of time:server.on('error', (e) => { if (e.code === 'EADDRINUSE') { console.error('Address in use, retrying...'); setTimeout(() => { server.close(); server.listen(PORT, HOST); }, 1000); } }); - eventName: string | symbol,listener?: Function): number;
Returns the number of listeners listening for the event named
eventName. Iflisteneris provided, it will return how many times the listener is found in the list of the listeners of the event.@param eventNameThe name of the event being listened for
@param listenerThe event handler function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName.server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ] - eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for
emitter.removeListener(). - on(event: 'checkContinue',listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void): this;
Adds the
listenerfunction to the end of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
on(event: 'request',listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void): this;on(event: 'session',listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>) => void): this; - once(event: 'checkContinue',listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void): this;
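A hedged sketch of wiring up the 'request' listener from the overloads above; the certificate paths and port are placeholders:
import { createSecureServer } from 'node:http2';
import { readFileSync } from 'node:fs';

const server = createSecureServer({
  key: readFileSync('localhost-privkey.pem'),  // placeholder path
  cert: readFileSync('localhost-cert.pem'),    // placeholder path
});

server.on('request', (request, response) => {
  response.writeHead(200, { 'content-type': 'text/plain' });
  response.end(`you asked for ${request.url}\n`);
});

server.listen(8443);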
Adds a one-time
listenerfunction for the event namedeventName. The next timeeventNameis triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
once(event: 'request',listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void): this;once(event: 'session',listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>) => void): this; - event: 'checkContinue',listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void): this;
Adds the
listenerfunction to the beginning of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
event: 'request',listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void): this;event: 'session',listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>) => void): this;event: 'stream',): this; - event: 'checkContinue',listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void): this;
Adds a one-time
listenerfunction for the event namedeventNameto the beginning of the listeners array. The next timeeventNameis triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
event: 'request',listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void): this;event: 'session',listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>) => void): this;event: 'stream',): this; - eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName, including any wrappers (such as those created by.once()).import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log'); Opposite of
unref(), callingref()on a previouslyunrefed server will not let the program exit if it's the only server left (the default behavior). If the server isrefed callingref()again will have no effect.- eventName?: string | symbol): this;
Removes all listeners, or those of the specified
eventName.It is bad practice to remove listeners added elsewhere in the code, particularly when the
EventEmitterinstance was created by some other component or module (e.g. sockets or file streams).Returns a reference to the
EventEmitter, so that calls can be chained. - eventName: string | symbol,listener: (...args: any[]) => void): this;
Removes the specified
listenerfrom the listener array for the event namedeventName.const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback);removeListener()will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specifiedeventName, thenremoveListener()must be called multiple times to remove each instance.Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any
removeListener()orremoveAllListeners()calls after emitting and before the last listener finishes execution will not remove them fromemit()in progress. Subsequent events behave as expected.import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // ABecause listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the
emitter.listeners()method will need to be recreated.When a single function has been added as a handler multiple times for a single event (as in the example below),
removeListener()will remove the most recently added instance. In the example theonce('ping')listener is removed:import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping');Returns a reference to the
EventEmitter, so that calls can be chained. - n: number): this;
By default
EventEmitters will print a warning if more than10listeners are added for a particular event. This is a useful default that helps finding memory leaks. Theemitter.setMaxListeners()method allows the limit to be modified for this specificEventEmitterinstance. The value can be set toInfinity(or0) to indicate an unlimited number of listeners.Returns a reference to the
EventEmitter, so that calls can be chained. - ): void;
The
server.setSecureContext()method replaces the secure context of an existing server. Existing connections to the server are not interrupted.@param optionsAn object containing any of the possible properties from the createSecureContext
optionsarguments (e.g.key,cert,ca, etc). - ): void;
Sets the session ticket keys.
Changes to the ticket keys are effective only for future server connections. Existing or currently pending server connections will use the previous keys.
See
Session Resumptionfor more information.@param keysA 48-byte buffer containing the session ticket keys.
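A hedged sketch of rotating the 48-byte session ticket keys with setTicketKeys(); the rotation interval is arbitrary and the TLS material is omitted as a placeholder:
import { createSecureServer } from 'node:http2';
import { randomBytes } from 'node:crypto';

const server = createSecureServer({ /* key and cert omitted */ });

// Rotate the ticket keys daily; existing connections keep using the old keys.
setInterval(() => {
  server.setTicketKeys(randomBytes(48));
}, 24 * 60 * 60 * 1000).unref();

// The current keys can be read back, e.g. to share them across workers.
const keys = server.getTicketKeys(); // 48-byte Buffer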
Calling
unref()on a server will allow the program to exit if this is the only active server in the event system. If the server is alreadyunrefed callingunref()again will have no effect.- ): void;
Throws ERR_HTTP2_INVALID_SETTING_VALUE for invalid settings values and ERR_INVALID_ARG_TYPE for an invalid settings argument.
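This description corresponds to the server's updateSettings() method; a minimal, hedged sketch with illustrative values (invalid values would throw the errors named above):
import { createSecureServer } from 'node:http2';

const server = createSecureServer({ /* key and cert omitted */ });

// Tighten the advertised HTTP/2 limits after the server is created.
server.updateSettings({
  maxConcurrentStreams: 100,
  maxHeaderListSize: 32 * 1024,
});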
interface Http2Server<Http1Request extends typeof IncomingMessage = typeof IncomingMessage, Http1Response extends typeof ServerResponse = typeof ServerResponse, Http2Request extends typeof Http2ServerRequest = typeof Http2ServerRequest, Http2Response extends typeof Http2ServerResponse = typeof Http2ServerResponse>
This class is used to create a TCP or
IPCserver.- maxConnections: number
Set this property to reject connections when the server's connection count gets high.
It is not recommended to use this option once a socket has been sent to a child with
child_process.fork(). Calls server.close() and returns a promise that fulfills when the server has closed.
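For the plaintext Http2Server described by this interface, a minimal sketch; most HTTP/2 clients (including browsers) require TLS, so an unencrypted server like this is mainly useful behind a proxy or for testing, and the port is a placeholder:
import { createServer } from 'node:http2';

const server = createServer();

server.on('stream', (stream, headers) => {
  stream.respond({ ':status': 200, 'content-type': 'text/plain' });
  stream.end(`hello from ${headers[':path']}\n`);
});

server.listen(8080);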
- event: 'checkContinue',listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void): this;
events.EventEmitter
- close
- connection
- error
- listening
- drop
event: 'request',listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void): this;events.EventEmitter
- close
- connection
- error
- listening
- drop
event: 'session',listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>) => void): this;events.EventEmitter
- close
- connection
- error
- listening
- drop
event: 'sessionError',): this;events.EventEmitter
- close
- connection
- error
- listening
- drop
event: 'stream',): this;events.EventEmitter
- close
- connection
- error
- listening
- drop
event: 'timeout',listener: () => void): this;events.EventEmitter
- close
- connection
- error
- listening
- drop
event: string | symbol,listener: (...args: any[]) => void): this;events.EventEmitter
- close
- connection
- error
- listening
- drop
Returns the bound
address, the addressfamilyname, andportof the server as reported by the operating system if listening on an IP socket (useful to find which port was assigned when getting an OS-assigned address):{ port: 12346, family: 'IPv4', address: '127.0.0.1' }.For a server listening on a pipe or Unix domain socket, the name is returned as a string.
const server = net.createServer((socket) => { socket.end('goodbye\n'); }).on('error', (err) => { // Handle errors here. throw err; }); // Grab an arbitrary unused port. server.listen(() => { console.log('opened server on', server.address()); });server.address()returnsnullbefore the'listening'event has been emitted or after callingserver.close().- ): this;
Stops the server from accepting new connections and keeps existing connections. This function is asynchronous, the server is finally closed when all connections are ended and the server emits a
'close'event. The optionalcallbackwill be called once the'close'event occurs. Unlike that event, it will be called with anErroras its only argument if the server was not open when it was closed.@param callbackCalled when the server is closed.
- emit(event: 'checkContinue',request: InstanceType<Http2Request>,response: InstanceType<Http2Response>): boolean;
Synchronously calls each of the listeners registered for the event named
eventName, in the order they were registered, passing the supplied arguments to each.Returns
trueif the event had listeners,falseotherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listeneremit(event: 'request',request: InstanceType<Http2Request>,response: InstanceType<Http2Response>): boolean; Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbols.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]- ): this;
Asynchronously get the number of concurrent connections on the server. Works when sockets were sent to forks.
Callback should take two arguments
errandcount. Returns the current max listener value for the
EventEmitterwhich is either set byemitter.setMaxListeners(n)or defaults to EventEmitter.defaultMaxListeners.- port?: number,hostname?: string,backlog?: number,listeningListener?: () => void): this;
Start a server listening for connections. A
net.Servercan be a TCP or anIPCserver depending on what it listens to.Possible signatures:
server.listen(handle[, backlog][, callback])server.listen(options[, callback])server.listen(path[, backlog][, callback])forIPCserversserver.listen([port[, host[, backlog]]][, callback])for TCP servers
This function is asynchronous. When the server starts listening, the
'listening'event will be emitted. The last parametercallbackwill be added as a listener for the'listening'event.All
listen()methods can take abacklogparameter to specify the maximum length of the queue of pending connections. The actual length will be determined by the OS through sysctl settings such astcp_max_syn_backlogandsomaxconnon Linux. The default value of this parameter is 511 (not 512).All Socket are set to
SO_REUSEADDR(seesocket(7)for details).The
server.listen()method can be called again if and only if there was an error during the firstserver.listen()call orserver.close()has been called. Otherwise, anERR_SERVER_ALREADY_LISTENerror will be thrown.One of the most common errors raised when listening is
EADDRINUSE. This happens when another server is already listening on the requestedport/path/handle. One way to handle this would be to retry after a certain amount of time:server.on('error', (e) => { if (e.code === 'EADDRINUSE') { console.error('Address in use, retrying...'); setTimeout(() => { server.close(); server.listen(PORT, HOST); }, 1000); } });port?: number,hostname?: string,listeningListener?: () => void): this;Start a server listening for connections. A
net.Servercan be a TCP or anIPCserver depending on what it listens to.Possible signatures:
server.listen(handle[, backlog][, callback])server.listen(options[, callback])server.listen(path[, backlog][, callback])forIPCserversserver.listen([port[, host[, backlog]]][, callback])for TCP servers
This function is asynchronous. When the server starts listening, the
'listening'event will be emitted. The last parametercallbackwill be added as a listener for the'listening'event.All
listen()methods can take abacklogparameter to specify the maximum length of the queue of pending connections. The actual length will be determined by the OS through sysctl settings such astcp_max_syn_backlogandsomaxconnon Linux. The default value of this parameter is 511 (not 512).All Socket are set to
SO_REUSEADDR(seesocket(7)for details).The
server.listen()method can be called again if and only if there was an error during the firstserver.listen()call orserver.close()has been called. Otherwise, anERR_SERVER_ALREADY_LISTENerror will be thrown.One of the most common errors raised when listening is
EADDRINUSE. This happens when another server is already listening on the requestedport/path/handle. One way to handle this would be to retry after a certain amount of time:server.on('error', (e) => { if (e.code === 'EADDRINUSE') { console.error('Address in use, retrying...'); setTimeout(() => { server.close(); server.listen(PORT, HOST); }, 1000); } });port?: number,backlog?: number,listeningListener?: () => void): this;Start a server listening for connections. A
net.Servercan be a TCP or anIPCserver depending on what it listens to.Possible signatures:
server.listen(handle[, backlog][, callback])server.listen(options[, callback])server.listen(path[, backlog][, callback])forIPCserversserver.listen([port[, host[, backlog]]][, callback])for TCP servers
This function is asynchronous. When the server starts listening, the
'listening'event will be emitted. The last parametercallbackwill be added as a listener for the'listening'event.All
listen()methods can take abacklogparameter to specify the maximum length of the queue of pending connections. The actual length will be determined by the OS through sysctl settings such astcp_max_syn_backlogandsomaxconnon Linux. The default value of this parameter is 511 (not 512).All Socket are set to
SO_REUSEADDR(seesocket(7)for details).The
server.listen()method can be called again if and only if there was an error during the firstserver.listen()call orserver.close()has been called. Otherwise, anERR_SERVER_ALREADY_LISTENerror will be thrown.One of the most common errors raised when listening is
EADDRINUSE. This happens when another server is already listening on the requestedport/path/handle. One way to handle this would be to retry after a certain amount of time:server.on('error', (e) => { if (e.code === 'EADDRINUSE') { console.error('Address in use, retrying...'); setTimeout(() => { server.close(); server.listen(PORT, HOST); }, 1000); } });port?: number,listeningListener?: () => void): this;Start a server listening for connections. A
net.Servercan be a TCP or anIPCserver depending on what it listens to.Possible signatures:
server.listen(handle[, backlog][, callback])server.listen(options[, callback])server.listen(path[, backlog][, callback])forIPCserversserver.listen([port[, host[, backlog]]][, callback])for TCP servers
This function is asynchronous. When the server starts listening, the
'listening'event will be emitted. The last parametercallbackwill be added as a listener for the'listening'event.All
listen()methods can take abacklogparameter to specify the maximum length of the queue of pending connections. The actual length will be determined by the OS through sysctl settings such astcp_max_syn_backlogandsomaxconnon Linux. The default value of this parameter is 511 (not 512).All Socket are set to
SO_REUSEADDR(seesocket(7)for details).The
server.listen()method can be called again if and only if there was an error during the firstserver.listen()call orserver.close()has been called. Otherwise, anERR_SERVER_ALREADY_LISTENerror will be thrown.One of the most common errors raised when listening is
EADDRINUSE. This happens when another server is already listening on the requestedport/path/handle. One way to handle this would be to retry after a certain amount of time:server.on('error', (e) => { if (e.code === 'EADDRINUSE') { console.error('Address in use, retrying...'); setTimeout(() => { server.close(); server.listen(PORT, HOST); }, 1000); } });path: string,backlog?: number,listeningListener?: () => void): this;Start a server listening for connections. A
net.Servercan be a TCP or anIPCserver depending on what it listens to.Possible signatures:
server.listen(handle[, backlog][, callback])server.listen(options[, callback])server.listen(path[, backlog][, callback])forIPCserversserver.listen([port[, host[, backlog]]][, callback])for TCP servers
This function is asynchronous. When the server starts listening, the
'listening'event will be emitted. The last parametercallbackwill be added as a listener for the'listening'event.All
listen()methods can take abacklogparameter to specify the maximum length of the queue of pending connections. The actual length will be determined by the OS through sysctl settings such astcp_max_syn_backlogandsomaxconnon Linux. The default value of this parameter is 511 (not 512).All Socket are set to
SO_REUSEADDR(seesocket(7)for details).The
server.listen()method can be called again if and only if there was an error during the firstserver.listen()call orserver.close()has been called. Otherwise, anERR_SERVER_ALREADY_LISTENerror will be thrown.One of the most common errors raised when listening is
EADDRINUSE. This happens when another server is already listening on the requestedport/path/handle. One way to handle this would be to retry after a certain amount of time:server.on('error', (e) => { if (e.code === 'EADDRINUSE') { console.error('Address in use, retrying...'); setTimeout(() => { server.close(); server.listen(PORT, HOST); }, 1000); } });path: string,listeningListener?: () => void): this;Start a server listening for connections. A
net.Servercan be a TCP or anIPCserver depending on what it listens to.Possible signatures:
server.listen(handle[, backlog][, callback])server.listen(options[, callback])server.listen(path[, backlog][, callback])forIPCserversserver.listen([port[, host[, backlog]]][, callback])for TCP servers
This function is asynchronous. When the server starts listening, the
'listening'event will be emitted. The last parametercallbackwill be added as a listener for the'listening'event.All
listen()methods can take abacklogparameter to specify the maximum length of the queue of pending connections. The actual length will be determined by the OS through sysctl settings such astcp_max_syn_backlogandsomaxconnon Linux. The default value of this parameter is 511 (not 512).All Socket are set to
SO_REUSEADDR(seesocket(7)for details).The
server.listen()method can be called again if and only if there was an error during the firstserver.listen()call orserver.close()has been called. Otherwise, anERR_SERVER_ALREADY_LISTENerror will be thrown.One of the most common errors raised when listening is
EADDRINUSE. This happens when another server is already listening on the requestedport/path/handle. One way to handle this would be to retry after a certain amount of time:server.on('error', (e) => { if (e.code === 'EADDRINUSE') { console.error('Address in use, retrying...'); setTimeout(() => { server.close(); server.listen(PORT, HOST); }, 1000); } });listeningListener?: () => void): this;Start a server listening for connections. A
net.Servercan be a TCP or anIPCserver depending on what it listens to.Possible signatures:
server.listen(handle[, backlog][, callback])server.listen(options[, callback])server.listen(path[, backlog][, callback])forIPCserversserver.listen([port[, host[, backlog]]][, callback])for TCP servers
This function is asynchronous. When the server starts listening, the
'listening'event will be emitted. The last parametercallbackwill be added as a listener for the'listening'event.All
listen()methods can take abacklogparameter to specify the maximum length of the queue of pending connections. The actual length will be determined by the OS through sysctl settings such astcp_max_syn_backlogandsomaxconnon Linux. The default value of this parameter is 511 (not 512).All Socket are set to
SO_REUSEADDR(seesocket(7)for details).The
server.listen()method can be called again if and only if there was an error during the firstserver.listen()call orserver.close()has been called. Otherwise, anERR_SERVER_ALREADY_LISTENerror will be thrown.One of the most common errors raised when listening is
EADDRINUSE. This happens when another server is already listening on the requestedport/path/handle. One way to handle this would be to retry after a certain amount of time:server.on('error', (e) => { if (e.code === 'EADDRINUSE') { console.error('Address in use, retrying...'); setTimeout(() => { server.close(); server.listen(PORT, HOST); }, 1000); } });handle: any,backlog?: number,listeningListener?: () => void): this;Start a server listening for connections. A
net.Servercan be a TCP or anIPCserver depending on what it listens to.Possible signatures:
server.listen(handle[, backlog][, callback])server.listen(options[, callback])server.listen(path[, backlog][, callback])forIPCserversserver.listen([port[, host[, backlog]]][, callback])for TCP servers
This function is asynchronous. When the server starts listening, the
'listening'event will be emitted. The last parametercallbackwill be added as a listener for the'listening'event.All
listen()methods can take abacklogparameter to specify the maximum length of the queue of pending connections. The actual length will be determined by the OS through sysctl settings such astcp_max_syn_backlogandsomaxconnon Linux. The default value of this parameter is 511 (not 512).All Socket are set to
SO_REUSEADDR(seesocket(7)for details).The
server.listen()method can be called again if and only if there was an error during the firstserver.listen()call orserver.close()has been called. Otherwise, anERR_SERVER_ALREADY_LISTENerror will be thrown.One of the most common errors raised when listening is
EADDRINUSE. This happens when another server is already listening on the requestedport/path/handle. One way to handle this would be to retry after a certain amount of time:server.on('error', (e) => { if (e.code === 'EADDRINUSE') { console.error('Address in use, retrying...'); setTimeout(() => { server.close(); server.listen(PORT, HOST); }, 1000); } });handle: any,listeningListener?: () => void): this;Start a server listening for connections. A
net.Servercan be a TCP or anIPCserver depending on what it listens to.Possible signatures:
server.listen(handle[, backlog][, callback])server.listen(options[, callback])server.listen(path[, backlog][, callback])forIPCserversserver.listen([port[, host[, backlog]]][, callback])for TCP servers
This function is asynchronous. When the server starts listening, the
'listening'event will be emitted. The last parametercallbackwill be added as a listener for the'listening'event.All
listen()methods can take abacklogparameter to specify the maximum length of the queue of pending connections. The actual length will be determined by the OS through sysctl settings such astcp_max_syn_backlogandsomaxconnon Linux. The default value of this parameter is 511 (not 512).All Socket are set to
SO_REUSEADDR(seesocket(7)for details).The
server.listen()method can be called again if and only if there was an error during the firstserver.listen()call orserver.close()has been called. Otherwise, anERR_SERVER_ALREADY_LISTENerror will be thrown.One of the most common errors raised when listening is
EADDRINUSE. This happens when another server is already listening on the requestedport/path/handle. One way to handle this would be to retry after a certain amount of time:server.on('error', (e) => { if (e.code === 'EADDRINUSE') { console.error('Address in use, retrying...'); setTimeout(() => { server.close(); server.listen(PORT, HOST); }, 1000); } }); - eventName: string | symbol,listener?: Function): number;
Returns the number of listeners listening for the event named
eventName. Iflisteneris provided, it will return how many times the listener is found in the list of the listeners of the event.@param eventNameThe name of the event being listened for
@param listenerThe event handler function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName.server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ] - eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for
emitter.removeListener(). - on(event: 'checkContinue',listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void): this;
Adds the
listenerfunction to the end of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
on(event: 'request',listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void): this;on(event: 'session',listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>) => void): this; - once(event: 'checkContinue',listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void): this;
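A hedged sketch of the 'session' listener from the overloads above, used here to enforce a per-session idle timeout; the timeout value and port are arbitrary:
import { createServer } from 'node:http2';

const server = createServer();

server.on('session', (session) => {
  // Tear the session down after two minutes of inactivity.
  session.setTimeout(120_000, () => session.close());
});

server.listen(8080);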
Adds a one-time
listenerfunction for the event namedeventName. The next timeeventNameis triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
once(event: 'request',listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void): this;once(event: 'session',listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>) => void): this; - event: 'checkContinue',listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void): this;
Adds the
listenerfunction to the beginning of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
event: 'request',listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void): this;event: 'session',listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>) => void): this;event: 'stream',): this; - event: 'checkContinue',listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void): this;
Adds a one-time
listenerfunction for the event namedeventNameto the beginning of the listeners array. The next timeeventNameis triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
event: 'request',listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void): this;event: 'session',listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>) => void): this;event: 'stream',): this; - eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName, including any wrappers (such as those created by.once()).import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log'); Opposite of
unref(), callingref()on a previouslyunrefed server will not let the program exit if it's the only server left (the default behavior). If the server isrefed callingref()again will have no effect.- eventName?: string | symbol): this;
Removes all listeners, or those of the specified
eventName.It is bad practice to remove listeners added elsewhere in the code, particularly when the
EventEmitterinstance was created by some other component or module (e.g. sockets or file streams).Returns a reference to the
EventEmitter, so that calls can be chained. - eventName: string | symbol,listener: (...args: any[]) => void): this;
Removes the specified
listenerfrom the listener array for the event namedeventName.const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback);removeListener()will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specifiedeventName, thenremoveListener()must be called multiple times to remove each instance.Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any
removeListener()orremoveAllListeners()calls after emitting and before the last listener finishes execution will not remove them fromemit()in progress. Subsequent events behave as expected.import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // ABecause listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the
emitter.listeners()method will need to be recreated.When a single function has been added as a handler multiple times for a single event (as in the example below),
removeListener()will remove the most recently added instance. In the example theonce('ping')listener is removed:import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping');Returns a reference to the
EventEmitter, so that calls can be chained. - n: number): this;
By default
EventEmitters will print a warning if more than10listeners are added for a particular event. This is a useful default that helps finding memory leaks. Theemitter.setMaxListeners()method allows the limit to be modified for this specificEventEmitterinstance. The value can be set toInfinity(or0) to indicate an unlimited number of listeners.Returns a reference to the
EventEmitter, so that calls can be chained. Calling
unref()on a server will allow the program to exit if this is the only active server in the event system. If the server is alreadyunrefed callingunref()again will have no effect.- ): void;
Throws ERR_HTTP2_INVALID_SETTING_VALUE for invalid settings values and ERR_INVALID_ARG_TYPE for an invalid settings argument.
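A minimal sketch tying together the server-level notes above: ref()/unref() for process keep-alive and settings validation. The settings-update signature above omits the method name, so http2server.updateSettings() is assumed here, and the settings values are arbitrary.

```js
import http2 from 'node:http2';

const server = http2.createServer();

// unref() lets the process exit even while the server is listening;
// ref() restores the default keep-alive behavior.
server.unref();
server.ref();

try {
  // Assumed method name; the signature above omits it.
  server.updateSettings({ enablePush: false, initialWindowSize: 65535 });
} catch (err) {
  // ERR_HTTP2_INVALID_SETTING_VALUE: a setting value is out of range.
  // ERR_INVALID_ARG_TYPE: the argument is not a settings object.
  console.error(err.code, err.message);
}
```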
interface Http2Session
The
EventEmitterclass is defined and exposed by thenode:eventsmodule:import { EventEmitter } from 'node:events';All
EventEmitters emit the event'newListener'when new listeners are added and'removeListener'when existing listeners are removed. It supports the captureRejections option, which enables automatic capture of promise rejections from listeners.
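A brief, hedged sketch of obtaining a client-side Http2Session via http2.connect() and inspecting a few of the properties and events documented below; the URL and the rejectUnauthorized option are placeholders for illustration.

```js
import http2 from 'node:http2';

// Placeholder authority; rejectUnauthorized: false only for a local
// self-signed certificate.
const client = http2.connect('https://localhost:8443', { rejectUnauthorized: false });

client.on('connect', () => {
  console.log(client.type === http2.constants.NGHTTP2_SESSION_CLIENT); // true
  console.log(client.connecting); // false once 'connect' has fired
});
client.on('error', (err) => console.error(err));

// Issue a single request on the session, then close it gracefully.
const req = client.request({ ':path': '/' });
req.on('response', (headers) => console.log(headers[':status']));
req.on('close', () => client.close());
req.end();
```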
- readonly alpnProtocol?: string
Value will be
undefinedif theHttp2Sessionis not yet connected to a socket,h2cif theHttp2Sessionis not connected to aTLSSocket, or will return the value of the connectedTLSSocket's ownalpnProtocolproperty. - readonly closed: boolean
Will be
trueif thisHttp2Sessioninstance has been closed, otherwisefalse. - readonly connecting: boolean
Will be
trueif thisHttp2Sessioninstance is still connecting, will be set tofalsebefore emittingconnectevent and/or calling thehttp2.connectcallback. - readonly destroyed: boolean
Will be
trueif thisHttp2Sessioninstance has been destroyed and must no longer be used, otherwisefalse. - readonly encrypted?: boolean
Value is
undefinedif theHttp2Sessionsession socket has not yet been connected,trueif theHttp2Sessionis connected with aTLSSocket, andfalseif theHttp2Sessionis connected to any other kind of socket or stream. - readonly localSettings: Settings
A prototype-less object describing the current local settings of this
Http2Session. The local settings are local to thisHttp2Sessioninstance. - readonly originSet?: string[]
If the
Http2Sessionis connected to aTLSSocket, theoriginSetproperty will return anArrayof origins for which theHttp2Sessionmay be considered authoritative.The
originSetproperty is only available when using a secure TLS connection. - readonly pendingSettingsAck: boolean
Indicates whether the
Http2Sessionis currently waiting for acknowledgment of a sentSETTINGSframe. Will betrueafter calling thehttp2session.settings()method. Will befalseonce all sentSETTINGSframes have been acknowledged. - readonly remoteSettings: Settings
A prototype-less object describing the current remote settings of this
Http2Session. The remote settings are set by the connected HTTP/2 peer. - readonly socket: Socket | TLSSocket
Returns a
Proxyobject that acts as anet.Socket(ortls.TLSSocket) but limits available methods to ones safe to use with HTTP/2.destroy,emit,end,pause,read,resume, andwritewill throw an error with codeERR_HTTP2_NO_SOCKET_MANIPULATION. SeeHttp2Session and Socketsfor more information.setTimeoutmethod will be called on thisHttp2Session.All other interactions will be routed directly to the socket.
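A short sketch, assuming a secure server, that inspects the session properties described above and the restrictions the socket proxy enforces; the TLS key/cert options are elided.

```js
import http2 from 'node:http2';

const server = http2.createSecureServer({ /* key, cert */ });

server.on('session', (session) => {
  console.log(session.alpnProtocol);  // e.g. 'h2' over TLS, 'h2c' otherwise
  console.log(session.encrypted);     // true when backed by a TLSSocket
  console.log(session.originSet);     // only defined over a secure connection
  console.log(session.remoteSettings.maxConcurrentStreams);

  // session.socket is a Proxy: reads such as remoteAddress pass through,
  // but direct I/O is blocked.
  console.log(session.socket.remoteAddress);
  try {
    session.socket.write('raw bytes'); // not permitted on an HTTP/2 session
  } catch (err) {
    console.error(err.code);           // 'ERR_HTTP2_NO_SOCKET_MANIPULATION'
  }
});
```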
- readonly state: SessionState
Provides miscellaneous information about the current state of the
Http2Session.An object describing the current status of this
Http2Session. - readonly type: number
The
http2session.typewill be equal tohttp2.constants.NGHTTP2_SESSION_SERVERif thisHttp2Sessioninstance is a server, andhttp2.constants.NGHTTP2_SESSION_CLIENTif the instance is a client. - event: 'error',): this;
Alias for
emitter.on(eventName, listener).event: 'frameError',listener: (frameType: number, errorCode: number, streamID: number) => void): this;Alias for
emitter.on(eventName, listener).event: 'goaway',): this;Alias for
emitter.on(eventName, listener).event: 'localSettings',): this;Alias for
emitter.on(eventName, listener).event: 'remoteSettings',): this;Alias for
emitter.on(eventName, listener).event: string | symbol,listener: (...args: any[]) => void): this;Alias for
emitter.on(eventName, listener). - callback?: () => void): void;
Gracefully closes the
Http2Session, allowing any existing streams to complete on their own and preventing newHttp2Streaminstances from being created. Once closed,http2session.destroy()might be called if there are no openHttp2Streaminstances.If specified, the
callbackfunction is registered as a handler for the'close'event. - code?: number): void;
Immediately terminates the
Http2Sessionand the associatednet.Socketortls.TLSSocket.Once destroyed, the
Http2Sessionwill emit the'close'event. Iferroris not undefined, an'error'event will be emitted immediately before the'close'event.If there are any remaining open
Http2Streamsassociated with theHttp2Session, those will also be destroyed.@param errorAn
Errorobject if theHttp2Sessionis being destroyed due to an error.@param codeThe HTTP/2 error code to send in the final
GOAWAYframe. If unspecified, anderroris not undefined, the default isINTERNAL_ERROR, otherwise defaults toNO_ERROR. - emit(event: 'close'): boolean;
Synchronously calls each of the listeners registered for the event named
eventName, in the order they were registered, passing the supplied arguments to each.Returns
trueif the event had listeners,falseotherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listeneremit(event: 'error',): boolean;Synchronously calls each of the listeners registered for the event named
eventName, in the order they were registered, passing the supplied arguments to each.Returns
trueif the event had listeners,falseotherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listeneremit(event: 'frameError',frameType: number,errorCode: number,streamID: number): boolean;Synchronously calls each of the listeners registered for the event named
eventName, in the order they were registered, passing the supplied arguments to each.Returns
trueif the event had listeners,falseotherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listeneremit(event: 'goaway',errorCode: number,lastStreamID: number,): boolean;Synchronously calls each of the listeners registered for the event named
eventName, in the order they were registered, passing the supplied arguments to each.Returns
trueif the event had listeners,falseotherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listeneremit(event: 'localSettings',): boolean;Synchronously calls each of the listeners registered for the event named
eventName, in the order they were registered, passing the supplied arguments to each.Returns
trueif the event had listeners,falseotherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listeneremit(event: 'ping'): boolean;Synchronously calls each of the listeners registered for the event named
eventName, in the order they were registered, passing the supplied arguments to each.Returns
trueif the event had listeners,falseotherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listeneremit(event: 'remoteSettings',): boolean;Synchronously calls each of the listeners registered for the event named
eventName, in the order they were registered, passing the supplied arguments to each.Returns
trueif the event had listeners,falseotherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listeneremit(event: 'timeout'): boolean;Synchronously calls each of the listeners registered for the event named
eventName, in the order they were registered, passing the supplied arguments to each.Returns
trueif the event had listeners,falseotherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listeneremit(event: string | symbol,...args: any[]): boolean;Synchronously calls each of the listeners registered for the event named
eventName, in the order they were registered, passing the supplied arguments to each.Returns
trueif the event had listeners,falseotherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbols.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]Returns the current max listener value for the
EventEmitterwhich is either set byemitter.setMaxListeners(n)or defaults to EventEmitter.defaultMaxListeners.- code?: number,lastStreamID?: number,opaqueData?: ArrayBufferView<ArrayBufferLike>): void;
Transmits a
GOAWAYframe to the connected peer without shutting down theHttp2Session.@param codeAn HTTP/2 error code
@param lastStreamIDThe numeric ID of the last processed
Http2Stream@param opaqueDataA
TypedArrayorDataViewinstance containing additional data to be carried within theGOAWAYframe. - eventName: string | symbol,listener?: Function): number;
Returns the number of listeners listening for the event named
eventName. Iflisteneris provided, it will return how many times the listener is found in the list of the listeners of the event.@param eventNameThe name of the event being listened for
@param listenerThe event handler function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName.server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ] - eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for
emitter.removeListener(). - on(event: 'close',listener: () => void): this;
Adds the
listenerfunction to the end of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
on(event: 'error',): this;Adds the
listenerfunction to the end of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
on(event: 'frameError',listener: (frameType: number, errorCode: number, streamID: number) => void): this;Adds the
listenerfunction to the end of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
on(event: 'goaway',): this;Adds the
listenerfunction to the end of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
on(event: 'localSettings',): this;Adds the
listenerfunction to the end of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
on(event: 'ping',listener: () => void): this;Adds the
listenerfunction to the end of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
on(event: 'remoteSettings',): this;Adds the
listenerfunction to the end of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
on(event: 'timeout',listener: () => void): this;Adds the
listenerfunction to the end of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
on(event: string | symbol,listener: (...args: any[]) => void): this;Adds the
listenerfunction to the end of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
- once(event: 'close',listener: () => void): this;
Adds a one-time
listenerfunction for the event namedeventName. The next timeeventNameis triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
once(event: 'error',): this;Adds a one-time
listenerfunction for the event namedeventName. The next timeeventNameis triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
once(event: 'frameError',listener: (frameType: number, errorCode: number, streamID: number) => void): this;Adds a one-time
listenerfunction for the event namedeventName. The next timeeventNameis triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
once(event: 'goaway',): this;Adds a one-time
listenerfunction for the event namedeventName. The next timeeventNameis triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
once(event: 'localSettings',): this;Adds a one-time
listenerfunction for the event namedeventName. The next timeeventNameis triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
once(event: 'ping',listener: () => void): this;Adds a one-time
listenerfunction for the event namedeventName. The next timeeventNameis triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
once(event: 'remoteSettings',): this;Adds a one-time
listenerfunction for the event namedeventName. The next timeeventNameis triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
once(event: 'timeout',listener: () => void): this;Adds a one-time
listenerfunction for the event namedeventName. The next timeeventNameis triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
once(event: string | symbol,listener: (...args: any[]) => void): this;Adds a one-time
listenerfunction for the event namedeventName. The next timeeventNameis triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
- ping(): boolean;
Sends a
PINGframe to the connected HTTP/2 peer. Acallbackfunction must be provided. The method will returntrueif thePINGwas sent,falseotherwise.The maximum number of outstanding (unacknowledged) pings is determined by the
maxOutstandingPingsconfiguration option. The default maximum is 10.If provided, the
payloadmust be aBuffer,TypedArray, orDataViewcontaining 8 bytes of data that will be transmitted with thePINGand returned with the ping acknowledgment.The callback will be invoked with three arguments: an error argument that will be
nullif thePINGwas successfully acknowledged, adurationargument that reports the number of milliseconds elapsed since the ping was sent and the acknowledgment was received, and aBuffercontaining the 8-bytePINGpayload.session.ping(Buffer.from('abcdefgh'), (err, duration, payload) => { if (!err) { console.log(`Ping acknowledged in ${duration} milliseconds`); console.log(`With payload '${payload.toString()}'`); } });If the
payloadargument is not specified, the default payload will be the 64-bit timestamp (little endian) marking the start of thePINGduration. - event: 'close',listener: () => void): this;
Adds the
listenerfunction to the beginning of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
event: 'error',): this;Adds the
listenerfunction to the beginning of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
event: 'frameError',listener: (frameType: number, errorCode: number, streamID: number) => void): this;Adds the
listenerfunction to the beginning of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
event: 'goaway',): this;Adds the
listenerfunction to the beginning of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
event: 'localSettings',): this;Adds the
listenerfunction to the beginning of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
event: 'ping',listener: () => void): this;Adds the
listenerfunction to the beginning of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
event: 'remoteSettings',): this;Adds the
listenerfunction to the beginning of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
event: 'timeout',listener: () => void): this;Adds the
listenerfunction to the beginning of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
event: string | symbol,listener: (...args: any[]) => void): this;Adds the
listenerfunction to the beginning of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
- event: 'close',listener: () => void): this;
Adds a one-time
listenerfunction for the event namedeventNameto the beginning of the listeners array. The next timeeventNameis triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
event: 'error',): this;Adds a one-time
listenerfunction for the event namedeventNameto the beginning of the listeners array. The next timeeventNameis triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
event: 'frameError',listener: (frameType: number, errorCode: number, streamID: number) => void): this;Adds a one-time
listenerfunction for the event namedeventNameto the beginning of the listeners array. The next timeeventNameis triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
event: 'goaway',): this;Adds a one-time
listenerfunction for the event namedeventNameto the beginning of the listeners array. The next timeeventNameis triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
event: 'localSettings',): this;Adds a one-time
listenerfunction for the event namedeventNameto the beginning of the listeners array. The next timeeventNameis triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
event: 'ping',listener: () => void): this;Adds a one-time
listenerfunction for the event namedeventNameto the beginning of the listeners array. The next timeeventNameis triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
event: 'remoteSettings',): this;Adds a one-time
listenerfunction for the event namedeventNameto the beginning of the listeners array. The next timeeventNameis triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
event: 'timeout',listener: () => void): this;Adds a one-time
listenerfunction for the event namedeventNameto the beginning of the listeners array. The next timeeventNameis triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
event: string | symbol,listener: (...args: any[]) => void): this;Adds a one-time
listenerfunction for the event namedeventNameto the beginning of the listeners array. The next timeeventNameis triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName, including any wrappers (such as those created by.once()).import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log'); Calls
ref()on thisHttp2Sessioninstance's underlyingnet.Socket.- eventName?: string | symbol): this;
Removes all listeners, or those of the specified
eventName.It is bad practice to remove listeners added elsewhere in the code, particularly when the
EventEmitterinstance was created by some other component or module (e.g. sockets or file streams).Returns a reference to the
EventEmitter, so that calls can be chained. - eventName: string | symbol,listener: (...args: any[]) => void): this;
Removes the specified
listenerfrom the listener array for the event namedeventName.const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback);removeListener()will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specifiedeventName, thenremoveListener()must be called multiple times to remove each instance.Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any
removeListener()orremoveAllListeners()calls after emitting and before the last listener finishes execution will not remove them fromemit()in progress. Subsequent events behave as expected.import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // ABecause listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the
emitter.listeners()method will need to be recreated.When a single function has been added as a handler multiple times for a single event (as in the example below),
removeListener()will remove the most recently added instance. In the example theonce('ping')listener is removed:import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping');Returns a reference to the
EventEmitter, so that calls can be chained. - windowSize: number): void;
Sets the local endpoint's window size. The
windowSizeis the total window size to set, not the delta.import http2 from 'node:http2'; const server = http2.createServer(); const expectedWindowSize = 2 ** 20; server.on('session', (session) => { // Set local window size to be 2 ** 20 session.setLocalWindowSize(expectedWindowSize); }); - n: number): this;
By default
EventEmitters will print a warning if more than10listeners are added for a particular event. This is a useful default that helps finding memory leaks. Theemitter.setMaxListeners()method allows the limit to be modified for this specificEventEmitterinstance. The value can be set toInfinity(or0) to indicate an unlimited number of listeners.Returns a reference to the
EventEmitter, so that calls can be chained. - msecs: number,callback?: () => void): void;
Used to set a callback function that is called when there is no activity on the
Http2Sessionaftermsecsmilliseconds. The givencallbackis registered as a listener on the'timeout'event. - ): void;
Updates the current local settings for this
Http2Sessionand sends a newSETTINGSframe to the connected HTTP/2 peer.Once called, the
http2session.pendingSettingsAckproperty will betruewhile the session is waiting for the remote peer to acknowledge the new settings.The new settings will not become effective until the
SETTINGSacknowledgment is received and the'localSettings'event is emitted. It is possible to send multipleSETTINGSframes while acknowledgment is still pending.@param callbackCallback that is called once the session is connected or right away if the session is already connected.
Calls
unref()on thisHttp2Sessioninstance's underlyingnet.Socket.
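A hedged sketch combining the Http2Session methods documented above (settings(), setTimeout(), goaway(), close(), and destroy()); the timeout value, GOAWAY payload, and error codes are arbitrary choices for illustration.

```js
import http2 from 'node:http2';

const server = http2.createServer();

server.on('session', (session) => {
  // Submit new local settings; they take effect once the peer acknowledges
  // the SETTINGS frame and 'localSettings' fires (pendingSettingsAck is
  // true until then).
  session.settings({ enablePush: false });
  session.once('localSettings', (settings) => {
    console.log('acknowledged, enablePush =', settings.enablePush);
  });

  // Tear down idle sessions after 60 seconds without activity.
  session.setTimeout(60_000, () => {
    // Announce shutdown first, then let existing streams finish.
    session.goaway(http2.constants.NGHTTP2_NO_ERROR, 0, Buffer.from('idle'));
    session.close(() => console.log('session closed'));
  });

  // For an immediate, non-graceful teardown of the session and its socket:
  // session.destroy(new Error('shutting down'), http2.constants.NGHTTP2_CANCEL);
});
```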
interface Http2Stream
Duplex streams are streams that implement both the
ReadableandWritableinterfaces.Examples of
Duplexstreams include:
- TCP sockets
- zlib streams
- crypto streams
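Before the individual members, a brief sketch of treating an Http2Stream as a Duplex on the server: the request body is read from the readable side and the response body is written to the writable side. respond() belongs to the ServerHttp2Stream subtype, and the port is arbitrary.

```js
import http2 from 'node:http2';

const server = http2.createServer();

server.on('stream', (stream, headers) => {
  // Readable side: collect the request body.
  let body = '';
  stream.setEncoding('utf8');
  stream.on('data', (chunk) => { body += chunk; });

  // Writable side: send headers, then the response body.
  stream.on('end', () => {
    stream.respond({ ':status': 200, 'content-type': 'text/plain' });
    stream.end(`received ${body.length} characters for ${headers[':path']}`);
  });
});

server.listen(8000);
```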
- readonly aborted: boolean
Set to
trueif theHttp2Streaminstance was aborted abnormally. When set, the'aborted'event will have been emitted. - allowHalfOpen: boolean
If
falsethen the stream will automatically end the writable side when the readable side ends. Set initially by theallowHalfOpenconstructor option, which defaults totrue.This can be changed manually to change the half-open behavior of an existing
Duplexstream instance, but must be changed before the'end'event is emitted. - readonly bufferSize: number
This property shows the number of characters currently buffered to be written. See
net.Socket.bufferSizefor details. - readonly destroyed: boolean
Set to
trueif theHttp2Streaminstance has been destroyed and is no longer usable. - readonly endAfterHeaders: boolean
Set to
trueif theEND_STREAMflag was set in the request or response HEADERS frame received, indicating that no additional data should be received and the readable side of theHttp2Streamwill be closed. - readonly id?: number
The numeric stream identifier of this
Http2Streaminstance. Set toundefinedif the stream identifier has not yet been assigned. - readonly pending: boolean
Set to
trueif theHttp2Streaminstance has not yet been assigned a numeric stream identifier. - readable: boolean
Is
trueif it is safe to call read, which means the stream has not been destroyed or emitted'error'or'end'. - readonly readableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'end'. - readonly readableEncoding: null | BufferEncoding
Getter for the property
encodingof a givenReadablestream. Theencodingproperty can be set using the setEncoding method. - readonly readableFlowing: null | boolean
This property reflects the current state of a
Readablestream as described in the Three states section. - readonly readableHighWaterMark: number
Returns the value of
highWaterMarkpassed when creating thisReadable. - readonly readableLength: number
This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the
highWaterMark. - readonly rstCode: number
Set to the
RST_STREAMerror codereported when theHttp2Streamis destroyed after either receiving anRST_STREAMframe from the connected peer, callinghttp2stream.close(), orhttp2stream.destroy(). Will beundefinedif theHttp2Streamhas not been closed. - readonly sentHeaders: OutgoingHttpHeaders
An object containing the outbound headers sent for this
Http2Stream. - readonly sentInfoHeaders?: OutgoingHttpHeaders[]
An array of objects containing the outbound informational (additional) headers sent for this
Http2Stream. - readonly sentTrailers?: OutgoingHttpHeaders
An object containing the outbound trailers sent for this
Http2Stream. - readonly session: undefined | Http2Session
A reference to the
Http2Sessioninstance that owns thisHttp2Stream. The value will beundefinedafter theHttp2Streaminstance is destroyed. - readonly state: StreamState
Provides miscellaneous information about the current state of the
Http2Stream.A current state of this
Http2Stream. - readonly writable: boolean
Is
trueif it is safe to callwritable.write(), which means the stream has not been destroyed, errored, or ended. - readonly writableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'finish'. - readonly writableCorked: number
Number of times
writable.uncork()needs to be called in order to fully uncork the stream. - readonly writableEnded: boolean
Is
trueafterwritable.end()has been called. This property does not indicate whether the data has been flushed, for this usewritable.writableFinishedinstead. - readonly writableHighWaterMark: number
Return the value of
highWaterMarkpassed when creating thisWritable. - readonly writableLength: number
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the
highWaterMark. - readonly writableNeedDrain: boolean
Is
trueif the stream's buffer has been full and stream will emit'drain'. Calls
readable.destroy()with anAbortErrorand returns a promise that fulfills when the stream is finished.- @returns
AsyncIteratorto fully consume the stream. - event: 'aborted',listener: () => void): this;
Event emitter. The defined events include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'close',listener: () => void): this;
event: 'data',): this;
event: 'drain',listener: () => void): this;
event: 'end',listener: () => void): this;
event: 'error',): this;
event: 'finish',listener: () => void): this;
event: 'frameError',listener: (frameType: number, errorCode: number) => void): this;
event: 'pipe',): this;
event: 'unpipe',): this;
event: 'streamClosed',listener: (code: number) => void): this;
event: 'timeout',listener: () => void): this;
event: 'trailers',): this;
event: 'wantTrailers',listener: () => void): this;
event: string | symbol,listener: (...args: any[]) => void): this;
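As a brief illustration of these listener overloads, a minimal sketch of wiring up a few of the listed events on a server-side Http2Stream (the port and response body are arbitrary):

import http2 from 'node:http2';

const server = http2.createServer();
server.on('stream', (stream) => {
  // 'close' fires once the Http2Stream is destroyed; rstCode holds the RST_STREAM code.
  stream.on('close', () => console.log('stream closed with code', stream.rstCode));
  // Always handle 'error' to avoid crashing the process on stream failures.
  stream.on('error', (err) => console.error('stream error:', err));
  // 'wantTrailers' is only emitted when waitForTrailers is set on respond().
  stream.respond({ ':status': 200 }, { waitForTrailers: true });
  stream.on('wantTrailers', () => stream.sendTrailers({ xyz: 'abc' }));
  stream.end('ok');
});
server.listen(8000);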
This method returns a new stream with chunks of the underlying stream paired with a counter in the form
[index, chunk]. The first index value is0and it increases by 1 for each chunk produced.@returnsa stream of indexed pairs.
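The description above corresponds to the experimental asIndexedPairs() helper on Readable; a minimal sketch, assuming that helper is available in your runtime:

import { Readable } from 'node:stream';

// Pair every chunk with its index.
const pairs = await Readable.from(['a', 'b', 'c']).asIndexedPairs().toArray();
console.log(pairs); // [[0, 'a'], [1, 'b'], [2, 'c']]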
- code?: number,callback?: () => void): void;
Closes the
Http2Streaminstance by sending anRST_STREAMframe to the connected HTTP/2 peer.@param codeUnsigned 32-bit integer identifying the error code.
@param callbackAn optional function registered to listen for the
'close'event. - stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
The
writable.cork()method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.The primary intent of
writable.cork()is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination,writable.cork()buffers all the chunks untilwritable.uncork()is called, which will pass them all towritable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use ofwritable.cork()without implementingwritable._writev()may have an adverse effect on throughput.See also:
writable.uncork(),writable._writev().- ): this;
Destroy the stream. Optionally emit an
'error'event, and emit a'close'event (unlessemitCloseis set tofalse). After this call, the readable stream will release any internal resources and subsequent calls topush()will be ignored.Once
destroy()has been called any further calls will be a no-op and no further errors except from_destroy()may be emitted as'error'.Implementors should not override this method, but instead implement
readable._destroy().@param errorError which will be passed as payload in
'error'event - drop(limit: number,
This method returns a new stream with the first limit chunks dropped from the start.
@param limitthe number of chunks to drop from the readable.
@returnsa stream with limit chunks dropped from the start.
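A minimal sketch of drop(), assuming the experimental Readable helpers are available (Readable.from is used here only for illustration):

import { Readable } from 'node:stream';

// Skip the first two chunks and collect the rest.
const rest = await Readable.from([1, 2, 3, 4, 5]).drop(2).toArray();
console.log(rest); // [3, 4, 5]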
- emit(event: 'aborted'): boolean;
Synchronously calls each of the listeners registered for the event named
eventName, in the order they were registered, passing the supplied arguments to each.Returns
trueif the event had listeners,falseotherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener - end(cb?: () => void): this;
Calling the
writable.end()method signals that no more data will be written to theWritable. The optionalchunkandencodingarguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!end(chunk: any,cb?: () => void): this;Calling the
writable.end()method signals that no more data will be written to theWritable. The optionalchunkandencodingarguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.end(chunk: any,encoding: BufferEncoding,cb?: () => void): this;Calling the
writable.end()method signals that no more data will be written to theWritable. The optionalchunkandencodingarguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.@param encodingThe encoding if
chunkis a string Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbols.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]- ): Promise<boolean>;
This method is similar to
Array.prototype.everyand calls fn on each chunk in the stream to check if all awaited return values are truthy for fn. Once an fn call on a chunk's awaited return value is falsy, the stream is destroyed and the promise is fulfilled withfalse. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled withtrue.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
trueif fn returned a truthy value for every one of the chunks. This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be
awaited.@param fna function to filter chunks from the stream. Async or not.
@returnsa stream filtered with the predicate fn.
- ): Promise<undefined | T>;
This method is similar to
Array.prototype.findand calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled withundefined.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefinedif no element was found.find(): Promise<any>;This method is similar to
Array.prototype.findand calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled withundefined.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefinedif no element was found. This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.
It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.
@param fna function to map over every chunk in the stream. May be async. May be a stream or generator.
@returnsa stream flat-mapped with the function fn.
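A minimal sketch of flatMap(), assuming the experimental Readable helpers are available:

import { Readable } from 'node:stream';

// Each chunk maps to an iterable; the results are flattened into a single stream.
const words = await Readable.from(['hello world', 'foo bar'])
  .flatMap((line) => line.split(' '))
  .toArray();
console.log(words); // ['hello', 'world', 'foo', 'bar']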
- ): Promise<void>;
This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be
awaited.This method is different from
for await...ofloops in that it can optionally process chunks concurrently. In addition, aforEachiteration can only be stopped by having passed asignaloption and aborting the related AbortController whilefor await...ofcan be stopped withbreakorreturn. In either case the stream will be destroyed.This method is different from listening to the
'data'event in that it uses thereadableevent in the underlying machinery and can limit the number of concurrent fn calls.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise for when the stream has finished.
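A minimal sketch of forEach() with bounded concurrency, assuming the experimental Readable helpers are available:

import { Readable } from 'node:stream';
import { setTimeout as sleep } from 'node:timers/promises';

// Process up to two chunks at a time; the returned promise settles
// once every chunk has been handled.
await Readable.from([1, 2, 3, 4]).forEach(
  async (chunk) => {
    await sleep(10);
    console.log('processed', chunk);
  },
  { concurrency: 2 },
);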
Returns the current max listener value for the
EventEmitterwhich is either set byemitter.setMaxListeners(n)or defaults to EventEmitter.defaultMaxListeners.The
readable.isPaused()method returns the current operating state of theReadable. This is used primarily by the mechanism that underlies thereadable.pipe()method. In most typical cases, there will be no reason to use this method directly.const readable = new stream.Readable(); readable.isPaused(); // === false readable.pause(); readable.isPaused(); // === true readable.resume(); readable.isPaused(); // === false- options?: { destroyOnReturn: boolean }): AsyncIterator<any>;
The iterator created by this method gives users the option to cancel the destruction of the stream if the
for await...ofloop is exited byreturn,break, orthrow, or if the iterator should destroy the stream if the stream emitted an error during iteration. - eventName: string | symbol,listener?: Function): number;
Returns the number of listeners listening for the event named
eventName. Iflisteneris provided, it will return how many times the listener is found in the list of the listeners of the event.@param eventNameThe name of the event being listened for
@param listenerThe event handler function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName.server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ] - map(
This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be
awaited before being passed to the result stream.@param fna function to map over every chunk in the stream. Async or not.
@returnsa stream mapped with the function fn.
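A minimal sketch of map(), assuming the experimental Readable helpers are available:

import { Readable } from 'node:stream';

// Run the mapper on up to two chunks at a time.
const doubled = await Readable.from([1, 2, 3])
  .map(async (n) => n * 2, { concurrency: 2 })
  .toArray();
console.log(doubled); // [2, 4, 6]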
- eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for
emitter.removeListener(). - on(event: 'aborted',listener: () => void): this;
Adds the
listenerfunction to the end of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
- once(event: 'aborted',listener: () => void): this;
Adds a one-time
listenerfunction for the event namedeventName. The next timeeventNameis triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
The
readable.pause()method will cause a stream in flowing mode to stop emitting'data'events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.const readable = getReadableStreamSomehow(); readable.on('data', (chunk) => { console.log(`Received ${chunk.length} bytes of data.`); readable.pause(); console.log('There will be no additional data for 1 second.'); setTimeout(() => { console.log('Now data will start flowing again.'); readable.resume(); }, 1000); });The
readable.pause()method has no effect if there is a'readable'event listener.- event: 'aborted',listener: () => void): this;
Adds the
listenerfunction to the beginning of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
event: 'frameError',listener: (frameType: number, errorCode: number) => void): this;event: 'trailers',): this; - event: 'aborted',listener: () => void): this;
Adds a one-time
listenerfunction for the event namedeventNameto the beginning of the listeners array. The next timeeventNameis triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
event: 'data',): this;event: 'frameError',listener: (frameType: number, errorCode: number) => void): this;event: 'trailers',): this; - eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName, including any wrappers (such as those created by.once()).import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log'); - read(size?: number): any;
The
readable.read()method reads data out of the internal buffer and returns it. If no data is available to be read,nullis returned. By default, the data is returned as aBufferobject unless an encoding has been specified using thereadable.setEncoding()method or the stream is operating in object mode.The optional
sizeargument specifies a specific number of bytes to read. Ifsizebytes are not available to be read,nullwill be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.If the
sizeargument is not specified, all of the data contained in the internal buffer will be returned.The
sizeargument must be less than or equal to 1 GiB.The
readable.read()method should only be called onReadablestreams operating in paused mode. In flowing mode,readable.read()is called automatically until the internal buffer is fully drained.const readable = getReadableStreamSomehow(); // 'readable' may be triggered multiple times as data is buffered in readable.on('readable', () => { let chunk; console.log('Stream is readable (new data received in buffer)'); // Use a loop to make sure we read all currently available data while (null !== (chunk = readable.read())) { console.log(`Read ${chunk.length} bytes of data...`); } }); // 'end' will be triggered once when there is no more data available readable.on('end', () => { console.log('Reached end of stream.'); });Each call to
readable.read()returns a chunk of data, ornull. The chunks are not concatenated. Awhileloop is necessary to consume all data currently in the buffer. When reading a large file.read()may returnnull, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new'readable'event will be emitted when there is more data in the buffer. Finally the'end'event will be emitted when there is no more data to come.Therefore to read a file's whole contents from a
readable, it is necessary to collect chunks across multiple'readable'events:const chunks = []; readable.on('readable', () => { let chunk; while (null !== (chunk = readable.read())) { chunks.push(chunk); } }); readable.on('end', () => { const content = chunks.join(''); });A
Readablestream in object mode will always return a single item from a call toreadable.read(size), regardless of the value of thesizeargument.If the
readable.read()method returns a chunk of data, a'data'event will also be emitted.Calling read after the
'end'event has been emitted will returnnull. No runtime error will be raised.@param sizeOptional argument to specify how much data to read.
- initial?: undefined,): Promise<T>;
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeErrorwith theERR_INVALID_ARGScode property.The reducer function iterates the stream element-by-element which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to
readable.mapmethod.@param fna reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
initial: T,): Promise<T>;This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeErrorwith theERR_INVALID_ARGScode property.The reducer function iterates the stream element-by-element which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to
readable.mapmethod.@param fna reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
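A minimal sketch of reduce() with an explicit initial value, assuming the experimental Readable helpers are available:

import { Readable } from 'node:stream';

// Sum all chunks, starting from 0.
const total = await Readable.from([1, 2, 3, 4]).reduce((acc, n) => acc + n, 0);
console.log(total); // 10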
- eventName?: string | symbol): this;
Removes all listeners, or those of the specified
eventName.It is bad practice to remove listeners added elsewhere in the code, particularly when the
EventEmitterinstance was created by some other component or module (e.g. sockets or file streams).Returns a reference to the
EventEmitter, so that calls can be chained. - event: 'close',listener: () => void): this;
Removes the specified
listenerfrom the listener array for the event namedeventName.const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback);removeListener()will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specifiedeventName, thenremoveListener()must be called multiple times to remove each instance.Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any
removeListener()orremoveAllListeners()calls after emitting and before the last listener finishes execution will not remove them fromemit()in progress. Subsequent events behave as expected.import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // ABecause listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the
emitter.listeners()method will need to be recreated.When a single function has been added as a handler multiple times for a single event (as in the example below),
removeListener()will remove the most recently added instance. In the example theonce('ping')listener is removed:import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping');Returns a reference to the
EventEmitter, so that calls can be chained. The
readable.resume()method causes an explicitly pausedReadablestream to resume emitting'data'events, switching the stream into flowing mode.The
readable.resume()method can be used to fully consume the data from a stream without actually processing any of that data:getReadableStreamSomehow() .resume() .on('end', () => { console.log('Reached the end, but did not read anything.'); });The
readable.resume()method has no effect if there is a'readable'event listener.- ): void;
Sends a trailing
HEADERSframe to the connected HTTP/2 peer. This method will cause theHttp2Streamto be immediately closed and must only be called after the'wantTrailers'event has been emitted. When sending a request or sending a response, theoptions.waitForTrailersoption must be set in order to keep theHttp2Streamopen after the finalDATAframe so that trailers can be sent.import http2 from 'node:http2'; const server = http2.createServer(); server.on('stream', (stream) => { stream.respond(undefined, { waitForTrailers: true }); stream.on('wantTrailers', () => { stream.sendTrailers({ xyz: 'abc' }); }); stream.end('Hello World'); });The HTTP/1 specification forbids trailers from containing HTTP/2 pseudo-header fields (e.g.
':method',':path', etc). - encoding: BufferEncoding): this;
The
writable.setDefaultEncoding()method sets the defaultencodingfor aWritablestream.@param encodingThe new default encoding
- encoding: BufferEncoding): this;
The
readable.setEncoding()method sets the character encoding for data read from theReadablestream.By default, no encoding is assigned and stream data will be returned as
Bufferobjects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than asBufferobjects. For instance, callingreadable.setEncoding('utf8')will cause the output data to be interpreted as UTF-8 data, and passed as strings. Callingreadable.setEncoding('hex')will cause the data to be encoded in hexadecimal string format.The
Readablestream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream asBufferobjects.const readable = getReadableStreamSomehow(); readable.setEncoding('utf8'); readable.on('data', (chunk) => { assert.equal(typeof chunk, 'string'); console.log('Got %d characters of string data:', chunk.length); });@param encodingThe encoding to use.
- n: number): this;
By default
EventEmitters will print a warning if more than10listeners are added for a particular event. This is a useful default that helps finding memory leaks. Theemitter.setMaxListeners()method allows the limit to be modified for this specificEventEmitterinstance. The value can be set toInfinity(or0) to indicate an unlimited number of listeners.Returns a reference to the
EventEmitter, so that calls can be chained. - msecs: number,callback?: () => void): void;
import http2 from 'node:http2'; const client = http2.connect('http://example.org:8000'); const { NGHTTP2_CANCEL } = http2.constants; const req = client.request({ ':path': '/' }); // Cancel the stream if there's no activity after 5 seconds req.setTimeout(5000, () => req.close(NGHTTP2_CANCEL)); - some(): Promise<boolean>;
This method is similar to
Array.prototype.someand calls fn on each chunk in the stream until the awaited return value istrue(or any truthy value). Once an fn call on a chunk's awaited return value is truthy, the stream is destroyed and the promise is fulfilled withtrue. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled withfalse.
@returnsa promise evaluating to
trueif fn returned a truthy value for at least one of the chunks. - take(limit: number,
This method returns a new stream with the first limit chunks.
@param limit
the number of chunks to take from the readable.
@returnsa stream with limit chunks taken.
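A minimal sketch of some() and take(), assuming the experimental Readable helpers are available:

import { Readable } from 'node:stream';

// some() resolves true as soon as one chunk satisfies the predicate.
const hasBig = await Readable.from([1, 2, 30]).some((n) => n > 10);
console.log(hasBig); // true

// take() limits the stream to its first `limit` chunks.
const firstTwo = await Readable.from([1, 2, 3, 4]).take(2).toArray();
console.log(firstTwo); // [1, 2]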
- ): Promise<any[]>;
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.
@returnsa promise containing an array with the contents of the stream.
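A minimal sketch of toArray(), assuming the experimental Readable helpers are available; the AbortController is optional and shown only to illustrate the signal option:

import { Readable } from 'node:stream';

// Collect the whole stream into memory; only sensible for small streams.
const controller = new AbortController();
const chunks = await Readable.from(['a', 'b', 'c']).toArray({ signal: controller.signal });
console.log(chunks); // ['a', 'b', 'c']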
The
writable.uncork()method flushes all data buffered since cork was called.When using
writable.cork()andwritable.uncork()to manage the buffering of writes to a stream, defer calls towritable.uncork()usingprocess.nextTick(). Doing so allows batching of allwritable.write()calls that occur within a given Node.js event loop phase.stream.cork(); stream.write('some '); stream.write('data '); process.nextTick(() => stream.uncork());If the
writable.cork()method is called multiple times on a stream, the same number of calls towritable.uncork()must be called to flush the buffered data.stream.cork(); stream.write('some '); stream.cork(); stream.write('data '); process.nextTick(() => { stream.uncork(); // The data will not be flushed until uncork() is called a second time. stream.uncork(); });See also:
writable.cork().- destination?: WritableStream): this;
The
readable.unpipe()method detaches aWritablestream previously attached using the pipe method.If the
destinationis not specified, then all pipes are detached.If the
destinationis specified, but no pipe is set up for it, then the method does nothing.import fs from 'node:fs'; const readable = getReadableStreamSomehow(); const writable = fs.createWriteStream('file.txt'); // All the data from readable goes into 'file.txt', // but only for the first second. readable.pipe(writable); setTimeout(() => { console.log('Stop writing to file.txt.'); readable.unpipe(writable); console.log('Manually close the file stream.'); writable.end(); }, 1000);@param destinationOptional specific stream to unpipe
- chunk: any,encoding?: BufferEncoding): void;
Passing
chunkasnullsignals the end of the stream (EOF) and behaves the same asreadable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.The
readable.unshift()method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.The
stream.unshift(chunk)method cannot be called after the'end'event has been emitted or a runtime error will be thrown.Developers using
stream.unshift()often should consider switching to use of aTransformstream instead. See theAPI for stream implementerssection for more information.// Pull off a header delimited by \n\n. // Use unshift() if we get too much. // Call the callback with (error, header, stream). import { StringDecoder } from 'node:string_decoder'; function parseHeader(stream, callback) { stream.on('error', callback); stream.on('readable', onReadable); const decoder = new StringDecoder('utf8'); let header = ''; function onReadable() { let chunk; while (null !== (chunk = stream.read())) { const str = decoder.write(chunk); if (str.includes('\n\n')) { // Found the header boundary. const split = str.split(/\n\n/); header += split.shift(); const remaining = split.join('\n\n'); const buf = Buffer.from(remaining, 'utf8'); stream.removeListener('error', callback); // Remove the 'readable' listener before unshifting. stream.removeListener('readable', onReadable); if (buf.length) stream.unshift(buf); // Now the body of the message can be read from the stream. callback(null, header, stream); return; } // Still reading the header. header += str; } } }Unlike push,
stream.unshift(chunk)will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results ifreadable.unshift()is called during a read (i.e. from within a _read implementation on a custom stream). Following the call toreadable.unshift()with an immediate push will reset the reading state appropriately, however it is best to simply avoid callingreadable.unshift()while in the process of performing a read.@param chunkChunk of data to unshift onto the read queue. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray}, {DataView} ornull. For object mode streams,chunkmay be any JavaScript value.@param encodingEncoding of string chunks. Must be a valid
Bufferencoding, such as'utf8'or'ascii'. - wrap(stream: ReadableStream): this;
Prior to Node.js 0.10, streams did not implement the entire
node:streammodule API as it is currently defined. (SeeCompatibilityfor more information.)When using an older Node.js library that emits
'data'events and has a pause method that is advisory only, thereadable.wrap()method can be used to create aReadablestream that uses the old stream as its data source.It will rarely be necessary to use
readable.wrap()but the method has been provided as a convenience for interacting with older Node.js applications and libraries.import { OldReader } from './old-api-module.js'; import { Readable } from 'node:stream'; const oreader = new OldReader(); const myReader = new Readable().wrap(oreader); myReader.on('readable', () => { myReader.read(); // etc. });@param streamAn "old style" readable stream
- chunk: any,): boolean;
The
writable.write()method writes some data to the stream, and calls the suppliedcallbackonce the data has been fully handled. If an error occurs, thecallbackwill be called with the error as its first argument. Thecallbackis called asynchronously and before'error'is emitted.The return value is
trueif the internal buffer is less than thehighWaterMarkconfigured when the stream was created after admittingchunk. Iffalseis returned, further attempts to write data to the stream should stop until the'drain'event is emitted.While a stream is not draining, calls to
write()will bufferchunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'event will be emitted. Oncewrite()returns false, do not write more chunks until the'drain'event is emitted. While callingwrite()on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform, because theTransformstreams are paused by default until they are piped or a'data'or'readable'event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readableand use pipe. However, if callingwrite()is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });A
Writablestream in object mode will always ignore theencodingargument.@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.@param callbackCallback for when this chunk of data is flushed.
@returnsfalseif the stream wishes for the calling code to wait for the'drain'event to be emitted before continuing to write additional data; otherwisetrue.chunk: any,encoding: BufferEncoding,): boolean;The
writable.write()method writes some data to the stream, and calls the suppliedcallbackonce the data has been fully handled. If an error occurs, thecallbackwill be called with the error as its first argument. Thecallbackis called asynchronously and before'error'is emitted.The return value is
trueif the internal buffer is less than thehighWaterMarkconfigured when the stream was created after admittingchunk. Iffalseis returned, further attempts to write data to the stream should stop until the'drain'event is emitted.While a stream is not draining, calls to
write()will bufferchunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'event will be emitted. Oncewrite()returns false, do not write more chunks until the'drain'event is emitted. While callingwrite()on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform, because theTransformstreams are paused by default until they are piped or a'data'or'readable'event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readableand use pipe. However, if callingwrite()is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });A
Writablestream in object mode will always ignore theencodingargument.@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.@param encodingThe encoding, if
chunkis a string.@param callbackCallback for when this chunk of data is flushed.
@returnsfalseif the stream wishes for the calling code to wait for the'drain'event to be emitted before continuing to write additional data; otherwisetrue.
interface IncomingHttpHeaders
interface IncomingHttpStatusHeader
interface SecureClientSessionOptions
- allowPartialTrustChain?: boolean
Treat intermediate (non-self-signed) certificates in the trust CA certificate list as trusted.
- ALPNCallback?: (arg: { protocols: string[]; servername: string }) => undefined | string
If set, this will be called when a client opens a connection using the ALPN extension. One argument will be passed to the callback: an object containing
servernameandprotocolsfields, respectively containing the server name from the SNI extension (if any) and an array of ALPN protocol name strings. The callback must return either one of the strings listed inprotocols, which will be returned to the client as the selected ALPN protocol, orundefined, to reject the connection with a fatal alert. If a string is returned that does not match one of the client's ALPN protocols, an error will be thrown. This option cannot be used with theALPNProtocolsoption, and setting both options will throw an error. - ALPNProtocols?: Uint8Array<ArrayBufferLike> | string[] | Uint8Array<ArrayBufferLike>[]
An array of strings or a Buffer naming possible ALPN protocols. (Protocols should be ordered by their priority.)
- cert?: string | Buffer<ArrayBufferLike> | (string | Buffer<ArrayBufferLike>)[]
Cert chains in PEM format. One cert chain should be provided per private key. Each cert chain should consist of the PEM formatted certificate for a provided private key, followed by the PEM formatted intermediate certificates (if any), in order, and not including the root CA (the root CA must be pre-known to the peer, see ca). When providing multiple cert chains, they do not have to be in the same order as their private keys in key. If the intermediate certificates are not provided, the peer will not be able to validate the certificate, and the handshake will fail.
- ciphers?: string
Cipher suite specification, replacing the default. For more information, see modifying the default cipher suite. Permitted ciphers can be obtained via tls.getCiphers(). Cipher names must be uppercased in order for OpenSSL to accept them.
- createConnection?: (authority: URL, option: SessionOptions) => Duplex
An optional callback that receives the
URLinstance passed toconnectand theoptionsobject, and returns anyDuplexstream that is to be used as the connection for this session. - ecdhCurve?: string
A string describing a named curve or a colon separated list of curve NIDs or names, for example P-521:P-384:P-256, to use for ECDH key agreement. Set to auto to select the curve automatically. Use crypto.getCurves() to obtain a list of available curve names. On recent releases, openssl ecparam -list_curves will also display the name and description of each available elliptic curve. Default: tls.DEFAULT_ECDH_CURVE.
- enableTrace?: boolean
When enabled, TLS packet trace information is written to
stderr. This can be used to debug TLS connection problems. - honorCipherOrder?: boolean
Attempt to use the server's cipher suite preferences instead of the client's. When true, causes SSL_OP_CIPHER_SERVER_PREFERENCE to be set in secureOptions
- key?: string | Buffer<ArrayBufferLike> | (string | Buffer<ArrayBufferLike> | KeyObject)[]
Private keys in PEM format. PEM allows the option of private keys being encrypted. Encrypted keys will be decrypted with options.passphrase. Multiple keys using different algorithms can be provided either as an array of unencrypted key strings or buffers, or an array of objects in the form {pem: <string|buffer>[, passphrase: <string>]}. The object form can only occur in an array. object.passphrase is optional. Encrypted keys will be decrypted with object.passphrase if provided, or options.passphrase if it is not.
- maxHeaderListPairs?: number
Sets the maximum number of header entries. This is similar to
server.maxHeadersCountorrequest.maxHeadersCountin thenode:httpmodule. The minimum value is1. - maxReservedRemoteStreams?: number
Sets the maximum number of reserved push streams the client will accept at any given time. Once the current number of currently reserved push streams exceeds this limit, new push streams sent by the server will be automatically rejected. The minimum allowed value is 0. The maximum allowed value is 2^32-1. A negative value sets this option to the maximum allowed value. - maxSendHeaderBlockLength?: number
- maxSendHeaderBlockLength?: number
Sets the maximum allowed size for a serialized, compressed block of headers. Attempts to send headers that exceed this limit will result in a
'frameError'event being emitted and the stream being closed and destroyed. - maxSessionMemory?: number
Sets the maximum memory that the
Http2Sessionis permitted to use. The value is expressed in terms of number of megabytes, e.g.1equals 1 megabyte. The minimum value allowed is1. This is a credit based limit, existingHttp2Streams may cause this limit to be exceeded, but newHttp2Streaminstances will be rejected while this limit is exceeded. The current number ofHttp2Streamsessions, the current memory use of the header compression tables, current data queued to be sent, and unacknowledgedPINGandSETTINGSframes are all counted towards the current limit. - maxSettings?: number
Sets the maximum number of settings entries per
SETTINGSframe. The minimum value allowed is1. - maxVersion?: SecureVersion
Optionally set the maximum TLS version to allow. One of
'TLSv1.3','TLSv1.2','TLSv1.1', or'TLSv1'. Cannot be specified along with thesecureProtocoloption, use one or the other. Default:'TLSv1.3', unless changed using CLI options. Using--tls-max-v1.2sets the default to'TLSv1.2'. Using--tls-max-v1.3sets the default to'TLSv1.3'. If multiple of the options are provided, the highest maximum is used. - minVersion?: SecureVersion
Optionally set the minimum TLS version to allow. One of
'TLSv1.3','TLSv1.2','TLSv1.1', or'TLSv1'. Cannot be specified along with thesecureProtocoloption, use one or the other. It is not recommended to use less than TLSv1.2, but it may be required for interoperability. Default:'TLSv1.2', unless changed using CLI options. Using--tls-v1.0sets the default to'TLSv1'. Using--tls-v1.1sets the default to'TLSv1.1'. Using--tls-min-v1.3sets the default to 'TLSv1.3'. If multiple of the options are provided, the lowest minimum is used. - paddingStrategy?: number
Strategy used for determining the amount of padding to use for
HEADERSandDATAframes. - peerMaxConcurrentStreams?: number
Sets the maximum number of concurrent streams for the remote peer as if a
SETTINGSframe had been received. Will be overridden if the remote peer sets its own value formaxConcurrentStreams. - pfx?: string | Buffer<ArrayBufferLike> | string | Buffer<ArrayBufferLike> | PxfObject[]
PFX or PKCS12 encoded private key and certificate chain. pfx is an alternative to providing key and cert individually. PFX is usually encrypted, if it is, passphrase will be used to decrypt it. Multiple PFX can be provided either as an array of unencrypted PFX buffers, or an array of objects in the form {buf: <string|buffer>[, passphrase: <string>]}. The object form can only occur in an array. object.passphrase is optional. Encrypted PFX will be decrypted with object.passphrase if provided, or options.passphrase if it is not.
- protocol?: 'http:' | 'https:'
The protocol to connect with, if not set in the
authority. Value may be either'http:'or'https:'. - pskCallback?: (hint: null | string) => null | PSKCallbackNegotation
When negotiating TLS-PSK (pre-shared keys), this function is called with optional identity
hintprovided by the server ornullin case of TLS 1.3 wherehintwas removed. It will be necessary to provide a customtls.checkServerIdentity()for the connection as the default one will try to check hostname/IP of the server against the certificate but that's not applicable for PSK because there won't be a certificate present. More information can be found in the RFC 4279. - remoteCustomSettings?: number[]
The array of integer values determines the settings types, which are included in the
CustomSettings-property of the received remoteSettings. Please see theCustomSettings-property of theHttp2Settingsobject for more information on the allowed setting types. - requestCert?: boolean
If true the server will request a certificate from clients that connect and attempt to verify that certificate. Defaults to false.
- secureOptions?: number
Optionally affect the OpenSSL protocol behavior, which is not usually necessary. This should be used carefully if at all! Value is a numeric bitmask of the SSL_OP_* options from OpenSSL Options
- secureProtocol?: string
Legacy mechanism to select the TLS protocol version to use, it does not support independent control of the minimum and maximum version, and does not support limiting the protocol to TLSv1.3. Use minVersion and maxVersion instead. The possible values are listed as SSL_METHODS, use the function names as strings. For example, use 'TLSv1_1_method' to force TLS version 1.1, or 'TLS_method' to allow any TLS protocol version up to TLSv1.3. It is not recommended to use TLS versions less than 1.2, but it may be required for interoperability. Default: none, see minVersion.
- sessionIdContext?: string
Opaque identifier used by servers to ensure session state is not shared between applications. Unused by clients.
- sessionTimeout?: number
The number of seconds after which a TLS session created by the server will no longer be resumable. See Session Resumption for more information. Default: 300.
- sigalgs?: string
Colon-separated list of supported signature algorithms. The list can contain digest algorithms (SHA256, MD5 etc.), public key algorithms (RSA-PSS, ECDSA etc.), combination of both (e.g 'RSA+SHA384') or TLS v1.3 scheme names (e.g. rsa_pss_pss_sha512).
- SNICallback?: (servername: string, cb: (err: null | Error, ctx?: SecureContext) => void) => void
SNICallback(servername, cb) <Function> A function that will be called if the client supports SNI TLS extension. Two arguments will be passed when called: servername and cb. SNICallback should invoke cb(null, ctx), where ctx is a SecureContext instance. (tls.createSecureContext(...) can be used to get a proper SecureContext.) If SNICallback wasn't provided the default callback with high-level API will be used (see below).
- strictFieldWhitespaceValidation?: boolean
If
true, it turns on strict leading and trailing whitespace validation for HTTP/2 header field names and values as per RFC-9113. - ticketKeys?: Buffer<ArrayBufferLike>
48-bytes of cryptographically strong pseudo-random data. See Session Resumption for more information.
- unknownProtocolTimeout?: number
Specifies a timeout in milliseconds that a server should wait when an 'unknownProtocol' event is emitted. If the socket has not been destroyed by that time the server will destroy it.
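A minimal sketch of passing a few of these options to http2.connect (the URL is a placeholder and the values are illustrative only):

import http2 from 'node:http2';

const client = http2.connect('https://localhost:8443', {
  minVersion: 'TLSv1.2',          // refuse anything older than TLS 1.2
  maxSessionMemory: 10,           // cap per-session memory at roughly 10 MB
  peerMaxConcurrentStreams: 100,  // assume this stream limit until the peer's SETTINGS arrives
  maxSettings: 32,                // cap entries per SETTINGS frame
});

const req = client.request({ ':path': '/' });
req.setEncoding('utf8');
req.on('data', (chunk) => process.stdout.write(chunk));
req.on('end', () => client.close());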
interface SecureServerOptions<Http1Request extends typeof IncomingMessage = typeof IncomingMessage, Http1Response extends typeof ServerResponse = typeof ServerResponse, Http2Request extends typeof Http2ServerRequest = typeof Http2ServerRequest, Http2Response extends typeof Http2ServerResponse = typeof Http2ServerResponse>
- allowPartialTrustChain?: boolean
Treat intermediate (non-self-signed) certificates in the trust CA certificate list as trusted.
- ALPNCallback?: (arg: { protocols: string[]; servername: string }) => undefined | string
If set, this will be called when a client opens a connection using the ALPN extension. One argument will be passed to the callback: an object containing
servernameandprotocolsfields, respectively containing the server name from the SNI extension (if any) and an array of ALPN protocol name strings. The callback must return either one of the strings listed inprotocols, which will be returned to the client as the selected ALPN protocol, orundefined, to reject the connection with a fatal alert. If a string is returned that does not match one of the client's ALPN protocols, an error will be thrown. This option cannot be used with theALPNProtocolsoption, and setting both options will throw an error. - ALPNProtocols?: Uint8Array<ArrayBufferLike> | string[] | Uint8Array<ArrayBufferLike>[]
An array of strings or a Buffer naming possible ALPN protocols. (Protocols should be ordered by their priority.)
- blockList?: BlockList
blockListcan be used for disabling inbound access to specific IP addresses, IP ranges, or IP subnets. This does not work if the server is behind a reverse proxy, NAT, etc. because the address checked against the block list is the address of the proxy, or the one specified by the NAT. - cert?: string | Buffer<ArrayBufferLike> | string | Buffer<ArrayBufferLike>[]
Cert chains in PEM format. One cert chain should be provided per private key. Each cert chain should consist of the PEM formatted certificate for a provided private key, followed by the PEM formatted intermediate certificates (if any), in order, and not including the root CA (the root CA must be pre-known to the peer, see ca). When providing multiple cert chains, they do not have to be in the same order as their private keys in key. If the intermediate certificates are not provided, the peer will not be able to validate the certificate, and the handshake will fail.
- ciphers?: string
Cipher suite specification, replacing the default. For more information, see modifying the default cipher suite. Permitted ciphers can be obtained via tls.getCiphers(). Cipher names must be uppercased in order for OpenSSL to accept them.
- ecdhCurve?: string
A string describing a named curve or a colon separated list of curve NIDs or names, for example P-521:P-384:P-256, to use for ECDH key agreement. Set to auto to select the curve automatically. Use crypto.getCurves() to obtain a list of available curve names. On recent releases, openssl ecparam -list_curves will also display the name and description of each available elliptic curve. Default: tls.DEFAULT_ECDH_CURVE.
- enableTrace?: boolean
When enabled, TLS packet trace information is written to
stderr. This can be used to debug TLS connection problems. - handshakeTimeout?: number
Abort the connection if the SSL/TLS handshake does not finish in the specified number of milliseconds. A 'tlsClientError' is emitted on the tls.Server object whenever a handshake times out. Default: 120000 (120 seconds).
- highWaterMark?: number
Optionally overrides all
net.Sockets'readableHighWaterMarkandwritableHighWaterMark. - honorCipherOrder?: boolean
Attempt to use the server's cipher suite preferences instead of the client's. When true, causes SSL_OP_CIPHER_SERVER_PREFERENCE to be set in secureOptions
- keepAlive?: boolean
If set to
true, it enables keep-alive functionality on the socket immediately after a new incoming connection is received, similarly on what is done insocket.setKeepAlive([enable][, initialDelay]). - keepAliveInitialDelay?: number
If set to a positive number, it sets the initial delay before the first keepalive probe is sent on an idle socket.
- key?: string | Buffer<ArrayBufferLike> | string | Buffer<ArrayBufferLike> | KeyObject[]
Private keys in PEM format. PEM allows the option of private keys being encrypted. Encrypted keys will be decrypted with options.passphrase. Multiple keys using different algorithms can be provided either as an array of unencrypted key strings or buffers, or an array of objects in the form {pem: <string|buffer>[, passphrase: <string>]}. The object form can only occur in an array. object.passphrase is optional. Encrypted keys will be decrypted with object.passphrase if provided, or options.passphrase if it is not.
- maxHeaderListPairs?: number
Sets the maximum number of header entries. This is similar to
server.maxHeadersCountorrequest.maxHeadersCountin thenode:httpmodule. The minimum value is1. - maxSendHeaderBlockLength?: number
Sets the maximum allowed size for a serialized, compressed block of headers. Attempts to send headers that exceed this limit will result in a
'frameError'event being emitted and the stream being closed and destroyed. - maxSessionMemory?: number
Sets the maximum memory that the
Http2Sessionis permitted to use. The value is expressed in terms of number of megabytes, e.g.1equals 1 megabyte. The minimum value allowed is1. This is a credit based limit, existingHttp2Streams may cause this limit to be exceeded, but newHttp2Streaminstances will be rejected while this limit is exceeded. The current number ofHttp2Streamsessions, the current memory use of the header compression tables, current data queued to be sent, and unacknowledgedPINGandSETTINGSframes are all counted towards the current limit. - maxSettings?: number
Sets the maximum number of settings entries per
SETTINGSframe. The minimum value allowed is1. - maxVersion?: SecureVersion
Optionally set the maximum TLS version to allow. One of
'TLSv1.3','TLSv1.2','TLSv1.1', or'TLSv1'. Cannot be specified along with thesecureProtocoloption, use one or the other. Default:'TLSv1.3', unless changed using CLI options. Using--tls-max-v1.2sets the default to'TLSv1.2'. Using--tls-max-v1.3sets the default to'TLSv1.3'. If multiple of the options are provided, the highest maximum is used. - minVersion?: SecureVersion
Optionally set the minimum TLS version to allow. One of
'TLSv1.3','TLSv1.2','TLSv1.1', or'TLSv1'. Cannot be specified along with thesecureProtocoloption, use one or the other. It is not recommended to use less than TLSv1.2, but it may be required for interoperability. Default:'TLSv1.2', unless changed using CLI options. Using--tls-v1.0sets the default to'TLSv1'. Using--tls-v1.1sets the default to'TLSv1.1'. Using--tls-min-v1.3sets the default to 'TLSv1.3'. If multiple of the options are provided, the lowest minimum is used. - noDelay?: boolean
If set to
true, it disables the use of Nagle's algorithm immediately after a new incoming connection is received. - paddingStrategy?: number
Strategy used for determining the amount of padding to use for
HEADERSandDATAframes. - peerMaxConcurrentStreams?: number
Sets the maximum number of concurrent streams for the remote peer as if a
SETTINGSframe had been received. Will be overridden if the remote peer sets its own value formaxConcurrentStreams. - pfx?: string | Buffer<ArrayBufferLike> | string | Buffer<ArrayBufferLike> | PxfObject[]
PFX or PKCS12 encoded private key and certificate chain. pfx is an alternative to providing key and cert individually. PFX is usually encrypted, if it is, passphrase will be used to decrypt it. Multiple PFX can be provided either as an array of unencrypted PFX buffers, or an array of objects in the form {buf: <string|buffer>[, passphrase: <string>]}. The object form can only occur in an array. object.passphrase is optional. Encrypted PFX will be decrypted with object.passphrase if provided, or options.passphrase if it is not.
- pskIdentityHint?: string
A hint to send to a client to help with selecting the identity during TLS-PSK negotiation. Will be ignored in TLS 1.3. Upon failing to set pskIdentityHint,
'tlsClientError' will be emitted with the ERR_TLS_PSK_SET_IDENTIY_HINT_FAILED code. - remoteCustomSettings?: number[]
An array of integer values that determines the setting types to be included in the
CustomSettings property of the received remoteSettings. Please see the CustomSettings property of the Http2Settings object for more information on the allowed setting types. - requestCert?: boolean
If true, the server will request a certificate from clients that connect and attempt to verify that certificate. Defaults to false.
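A sketch of client-certificate verification (file names are hypothetical; ca and rejectUnauthorized are standard TLS options not listed in this excerpt):

import http2 from 'node:http2';
import fs from 'node:fs';

const server = http2.createSecureServer({
  key: fs.readFileSync('server-key.pem'),
  cert: fs.readFileSync('server-cert.pem'),
  ca: fs.readFileSync('client-ca.pem'),   // CA that signs acceptable client certificates
  requestCert: true,                      // ask every connecting client for a certificate
  rejectUnauthorized: true,               // drop clients whose certificate does not verify
});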
- secureOptions?: number
Optionally affect the OpenSSL protocol behavior, which is not usually necessary. This should be used carefully if at all! Value is a numeric bitmask of the SSL_OP_* options from OpenSSL Options
- secureProtocol?: string
Legacy mechanism to select the TLS protocol version to use, it does not support independent control of the minimum and maximum version, and does not support limiting the protocol to TLSv1.3. Use minVersion and maxVersion instead. The possible values are listed as SSL_METHODS, use the function names as strings. For example, use 'TLSv1_1_method' to force TLS version 1.1, or 'TLS_method' to allow any TLS protocol version up to TLSv1.3. It is not recommended to use TLS versions less than 1.2, but it may be required for interoperability. Default: none, see minVersion.
- sessionIdContext?: string
Opaque identifier used by servers to ensure session state is not shared between applications. Unused by clients.
- sessionTimeout?: number
The number of seconds after which a TLS session created by the server will no longer be resumable. See Session Resumption for more information. Default: 300.
- sigalgs?: string
Colon-separated list of supported signature algorithms. The list can contain digest algorithms (SHA256, MD5, etc.), public key algorithms (RSA-PSS, ECDSA, etc.), combinations of both (e.g. 'RSA+SHA384'), or TLS v1.3 scheme names (e.g. rsa_pss_pss_sha512).
- SNICallback?: (servername: string, cb: (err: null | Error, ctx?: SecureContext) => void) => void
SNICallback(servername, cb) <Function> A function that will be called if the client supports the SNI TLS extension. Two arguments will be passed when called: servername and cb. SNICallback should invoke cb(null, ctx), where ctx is a SecureContext instance (tls.createSecureContext(...) can be used to get a proper SecureContext). If SNICallback wasn't provided, a default callback using the high-level API will be used.
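A sketch of per-hostname certificates via SNICallback (names and paths are hypothetical):

import http2 from 'node:http2';
import tls from 'node:tls';
import fs from 'node:fs';

const contexts = new Map([
  ['example.org', tls.createSecureContext({
    key: fs.readFileSync('example-org-key.pem'),
    cert: fs.readFileSync('example-org-cert.pem'),
  })],
]);

const server = http2.createSecureServer({
  key: fs.readFileSync('default-key.pem'),
  cert: fs.readFileSync('default-cert.pem'),
  SNICallback(servername, cb) {
    // Unknown names fall back to the default key/cert above.
    cb(null, contexts.get(servername));
  },
});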
- strictFieldWhitespaceValidation?: boolean
If
true, it turns on strict leading and trailing whitespace validation for HTTP/2 header field names and values as per RFC-9113. - unknownProtocolTimeout?: number
Specifies a timeout in milliseconds that a server should wait when an 'unknownProtocol' event is emitted. If the socket has not been destroyed by that time the server will destroy it.
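Putting several of these options together, a hedged sketch of a tuned secure server (file names are hypothetical, values are illustrative only):

import http2 from 'node:http2';
import fs from 'node:fs';

const server = http2.createSecureServer({
  key: fs.readFileSync('key.pem'),
  cert: fs.readFileSync('cert.pem'),
  minVersion: 'TLSv1.2',
  maxVersion: 'TLSv1.3',
  maxSessionMemory: 10,          // megabytes allowed per Http2Session
  maxSettings: 32,               // SETTINGS entries accepted per frame
  peerMaxConcurrentStreams: 100, // assumed until the peer sends its own value
  unknownProtocolTimeout: 1000,  // ms before destroying sockets that never speak HTTP/2
});
server.listen(8443);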
interface SecureServerSessionOptions<Http1Request extends typeof IncomingMessage = typeof IncomingMessage, Http1Response extends typeof ServerResponse = typeof ServerResponse, Http2Request extends typeof Http2ServerRequest = typeof Http2ServerRequest, Http2Response extends typeof Http2ServerResponse = typeof Http2ServerResponse>
- allowPartialTrustChain?: boolean
Treat intermediate (non-self-signed) certificates in the trust CA certificate list as trusted.
- ALPNCallback?: (arg: { protocols: string[]; servername: string }) => undefined | string
If set, this will be called when a client opens a connection using the ALPN extension. One argument will be passed to the callback: an object containing
servernameandprotocolsfields, respectively containing the server name from the SNI extension (if any) and an array of ALPN protocol name strings. The callback must return either one of the strings listed inprotocols, which will be returned to the client as the selected ALPN protocol, orundefined, to reject the connection with a fatal alert. If a string is returned that does not match one of the client's ALPN protocols, an error will be thrown. This option cannot be used with theALPNProtocolsoption, and setting both options will throw an error. - ALPNProtocols?: Uint8Array<ArrayBufferLike> | string[] | Uint8Array<ArrayBufferLike>[]
An array of strings or a Buffer naming possible ALPN protocols. (Protocols should be ordered by their priority.)
- blockList?: BlockList
blockListcan be used for disabling inbound access to specific IP addresses, IP ranges, or IP subnets. This does not work if the server is behind a reverse proxy, NAT, etc. because the address checked against the block list is the address of the proxy, or the one specified by the NAT. - cert?: string | Buffer<ArrayBufferLike> | string | Buffer<ArrayBufferLike>[]
Cert chains in PEM format. One cert chain should be provided per private key. Each cert chain should consist of the PEM formatted certificate for a provided private key, followed by the PEM formatted intermediate certificates (if any), in order, and not including the root CA (the root CA must be pre-known to the peer, see ca). When providing multiple cert chains, they do not have to be in the same order as their private keys in key. If the intermediate certificates are not provided, the peer will not be able to validate the certificate, and the handshake will fail.
- ciphers?: string
Cipher suite specification, replacing the default. For more information, see modifying the default cipher suite. Permitted ciphers can be obtained via tls.getCiphers(). Cipher names must be uppercased in order for OpenSSL to accept them.
- ecdhCurve?: string
A string describing a named curve or a colon separated list of curve NIDs or names, for example P-521:P-384:P-256, to use for ECDH key agreement. Set to auto to select the curve automatically. Use crypto.getCurves() to obtain a list of available curve names. On recent releases, openssl ecparam -list_curves will also display the name and description of each available elliptic curve. Default: tls.DEFAULT_ECDH_CURVE.
- enableTrace?: boolean
When enabled, TLS packet trace information is written to
stderr. This can be used to debug TLS connection problems. - handshakeTimeout?: number
Abort the connection if the SSL/TLS handshake does not finish in the specified number of milliseconds. A 'tlsClientError' is emitted on the tls.Server object whenever a handshake times out. Default: 120000 (120 seconds).
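For example, a sketch that shortens the handshake deadline and logs failed handshakes (paths hypothetical):

import http2 from 'node:http2';
import fs from 'node:fs';

const server = http2.createSecureServer({
  key: fs.readFileSync('key.pem'),
  cert: fs.readFileSync('cert.pem'),
  handshakeTimeout: 5000, // milliseconds
});

// Emitted whenever a handshake errors or times out.
server.on('tlsClientError', (err) => {
  console.error('TLS handshake failed:', err.message);
});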
- highWaterMark?: number
Optionally overrides all
net.Sockets'readableHighWaterMarkandwritableHighWaterMark. - honorCipherOrder?: boolean
Attempt to use the server's cipher suite preferences instead of the client's. When true, causes SSL_OP_CIPHER_SERVER_PREFERENCE to be set in secureOptions
- keepAlive?: boolean
If set to
true, it enables keep-alive functionality on the socket immediately after a new incoming connection is received, similarly on what is done insocket.setKeepAlive([enable][, initialDelay]). - keepAliveInitialDelay?: number
If set to a positive number, it sets the initial delay before the first keepalive probe is sent on an idle socket.
- key?: string | Buffer<ArrayBufferLike> | string | Buffer<ArrayBufferLike> | KeyObject[]
Private keys in PEM format. PEM allows the option of private keys being encrypted. Encrypted keys will be decrypted with options.passphrase. Multiple keys using different algorithms can be provided either as an array of unencrypted key strings or buffers, or an array of objects in the form {pem: <string|buffer>[, passphrase: <string>]}. The object form can only occur in an array. object.passphrase is optional. Encrypted keys will be decrypted with object.passphrase if provided, or options.passphrase if it is not.
- maxHeaderListPairs?: number
Sets the maximum number of header entries. This is similar to
server.maxHeadersCountorrequest.maxHeadersCountin thenode:httpmodule. The minimum value is1. - maxSendHeaderBlockLength?: number
Sets the maximum allowed size for a serialized, compressed block of headers. Attempts to send headers that exceed this limit will result in a
'frameError'event being emitted and the stream being closed and destroyed. - maxSessionMemory?: number
Sets the maximum memory that the
Http2Session is permitted to use. The value is expressed in terms of number of megabytes, e.g. 1 equals 1 megabyte. The minimum value allowed is 1. This is a credit-based limit: existing Http2Streams may cause this limit to be exceeded, but new Http2Stream instances will be rejected while this limit is exceeded. The current number of Http2Stream sessions, the current memory use of the header compression tables, current data queued to be sent, and unacknowledged PING and SETTINGS frames are all counted towards the current limit. - maxSettings?: number
Sets the maximum number of settings entries per
SETTINGSframe. The minimum value allowed is1. - maxVersion?: SecureVersion
Optionally set the maximum TLS version to allow. One of
'TLSv1.3','TLSv1.2','TLSv1.1', or'TLSv1'. Cannot be specified along with thesecureProtocoloption, use one or the other. Default:'TLSv1.3', unless changed using CLI options. Using--tls-max-v1.2sets the default to'TLSv1.2'. Using--tls-max-v1.3sets the default to'TLSv1.3'. If multiple of the options are provided, the highest maximum is used. - minVersion?: SecureVersion
Optionally set the minimum TLS version to allow. One of
'TLSv1.3', 'TLSv1.2', 'TLSv1.1', or 'TLSv1'. Cannot be specified along with the secureProtocol option; use one or the other. It is not recommended to use less than TLSv1.2, but it may be required for interoperability. Default: 'TLSv1.2', unless changed using CLI options. Using --tls-min-v1.0 sets the default to 'TLSv1'. Using --tls-min-v1.1 sets the default to 'TLSv1.1'. Using --tls-min-v1.3 sets the default to 'TLSv1.3'. If multiple of the options are provided, the lowest minimum is used. - noDelay?: boolean
If set to
true, it disables the use of Nagle's algorithm immediately after a new incoming connection is received. - paddingStrategy?: number
Strategy used for determining the amount of padding to use for
HEADERSandDATAframes. - peerMaxConcurrentStreams?: number
Sets the maximum number of concurrent streams for the remote peer as if a
SETTINGSframe had been received. Will be overridden if the remote peer sets its own value formaxConcurrentStreams. - pfx?: string | Buffer<ArrayBufferLike> | string | Buffer<ArrayBufferLike> | PxfObject[]
PFX or PKCS12 encoded private key and certificate chain. pfx is an alternative to providing key and cert individually. PFX is usually encrypted; if it is, passphrase will be used to decrypt it. Multiple PFX can be provided either as an array of unencrypted PFX buffers, or an array of objects in the form {buf: <string|buffer>[, passphrase: <string>]}. The object form can only occur in an array. object.passphrase is optional. Encrypted PFX will be decrypted with object.passphrase if provided, or options.passphrase if it is not.
- pskIdentityHint?: string
A hint to send to a client to help with selecting the identity during TLS-PSK negotiation. Will be ignored in TLS 1.3. Upon failing to set pskIdentityHint,
'tlsClientError' will be emitted with the ERR_TLS_PSK_SET_IDENTIY_HINT_FAILED code. - remoteCustomSettings?: number[]
An array of integer values that determines the setting types to be included in the
CustomSettings property of the received remoteSettings. Please see the CustomSettings property of the Http2Settings object for more information on the allowed setting types. - requestCert?: boolean
If true, the server will request a certificate from clients that connect and attempt to verify that certificate. Defaults to false.
- secureOptions?: number
Optionally affect the OpenSSL protocol behavior, which is not usually necessary. This should be used carefully if at all! Value is a numeric bitmask of the SSL_OP_* options from OpenSSL Options
- secureProtocol?: string
Legacy mechanism to select the TLS protocol version to use, it does not support independent control of the minimum and maximum version, and does not support limiting the protocol to TLSv1.3. Use minVersion and maxVersion instead. The possible values are listed as SSL_METHODS, use the function names as strings. For example, use 'TLSv1_1_method' to force TLS version 1.1, or 'TLS_method' to allow any TLS protocol version up to TLSv1.3. It is not recommended to use TLS versions less than 1.2, but it may be required for interoperability. Default: none, see minVersion.
- sessionIdContext?: string
Opaque identifier used by servers to ensure session state is not shared between applications. Unused by clients.
- sessionTimeout?: number
The number of seconds after which a TLS session created by the server will no longer be resumable. See Session Resumption for more information. Default: 300.
- sigalgs?: string
Colon-separated list of supported signature algorithms. The list can contain digest algorithms (SHA256, MD5, etc.), public key algorithms (RSA-PSS, ECDSA, etc.), combinations of both (e.g. 'RSA+SHA384'), or TLS v1.3 scheme names (e.g. rsa_pss_pss_sha512).
- SNICallback?: (servername: string, cb: (err: null | Error, ctx?: SecureContext) => void) => void
SNICallback(servername, cb) <Function> A function that will be called if the client supports the SNI TLS extension. Two arguments will be passed when called: servername and cb. SNICallback should invoke cb(null, ctx), where ctx is a SecureContext instance (tls.createSecureContext(...) can be used to get a proper SecureContext). If SNICallback wasn't provided, a default callback using the high-level API will be used.
- strictFieldWhitespaceValidation?: boolean
If
true, it turns on strict leading and trailing whitespace validation for HTTP/2 header field names and values as per RFC-9113. - unknownProtocolTimeout?: number
Specifies a timeout in milliseconds that a server should wait when an 'unknownProtocol' event is emitted. If the socket has not been destroyed by that time the server will destroy it.
interface ServerHttp2Session<Http1Request extends typeof IncomingMessage = typeof IncomingMessage, Http1Response extends typeof ServerResponse = typeof ServerResponse, Http2Request extends typeof Http2ServerRequest = typeof Http2ServerRequest, Http2Response extends typeof Http2ServerResponse = typeof Http2ServerResponse>
The
EventEmitterclass is defined and exposed by thenode:eventsmodule:import { EventEmitter } from 'node:events';All
EventEmitters emit the event 'newListener' when new listeners are added and 'removeListener' when existing listeners are removed.
- readonly alpnProtocol?: string
Value will be
undefinedif theHttp2Sessionis not yet connected to a socket,h2cif theHttp2Sessionis not connected to aTLSSocket, or will return the value of the connectedTLSSocket's ownalpnProtocolproperty. - readonly closed: boolean
Will be
trueif thisHttp2Sessioninstance has been closed, otherwisefalse. - readonly connecting: boolean
Will be
trueif thisHttp2Sessioninstance is still connecting, will be set tofalsebefore emittingconnectevent and/or calling thehttp2.connectcallback. - readonly destroyed: boolean
Will be
trueif thisHttp2Sessioninstance has been destroyed and must no longer be used, otherwisefalse. - readonly encrypted?: boolean
Value is
undefinedif theHttp2Sessionsession socket has not yet been connected,trueif theHttp2Sessionis connected with aTLSSocket, andfalseif theHttp2Sessionis connected to any other kind of socket or stream. - readonly localSettings: Settings
A prototype-less object describing the current local settings of this
Http2Session. The local settings are local to thisHttp2Sessioninstance. - readonly originSet?: string[]
If the
Http2Sessionis connected to aTLSSocket, theoriginSetproperty will return anArrayof origins for which theHttp2Sessionmay be considered authoritative.The
originSetproperty is only available when using a secure TLS connection. - readonly pendingSettingsAck: boolean
Indicates whether the
Http2Sessionis currently waiting for acknowledgment of a sentSETTINGSframe. Will betrueafter calling thehttp2session.settings()method. Will befalseonce all sentSETTINGSframes have been acknowledged. - readonly remoteSettings: Settings
A prototype-less object describing the current remote settings of this
Http2Session. The remote settings are set by the connected HTTP/2 peer. - readonly server: Http2Server<Http1Request, Http1Response, Http2Request, Http2Response> | Http2SecureServer<Http1Request, Http1Response, Http2Request, Http2Response>
- readonly socket: Socket | TLSSocket
Returns a
Proxyobject that acts as anet.Socket(ortls.TLSSocket) but limits available methods to ones safe to use with HTTP/2.destroy,emit,end,pause,read,resume, andwritewill throw an error with codeERR_HTTP2_NO_SOCKET_MANIPULATION. SeeHttp2Session and Socketsfor more information.setTimeoutmethod will be called on thisHttp2Session.All other interactions will be routed directly to the socket.
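For example (a sketch; server stands for an existing Http2Server or Http2SecureServer):

server.on('session', (session) => {
  // Property reads such as these are routed to the underlying socket.
  console.log(session.socket.remoteAddress, session.socket.remotePort);

  // Direct I/O is blocked by the proxy and would throw
  // ERR_HTTP2_NO_SOCKET_MANIPULATION:
  // session.socket.write('raw bytes');
});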
- readonly state: SessionState
Provides miscellaneous information about the current state of the
Http2Session.An object describing the current status of this
Http2Session. - readonly type: number
The
http2session.typewill be equal tohttp2.constants.NGHTTP2_SESSION_SERVERif thisHttp2Sessioninstance is a server, andhttp2.constants.NGHTTP2_SESSION_CLIENTif the instance is a client. - event: 'connect',listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>, socket: Socket | TLSSocket) => void): this;
Alias for
emitter.on(eventName, listener).event: 'stream',): this;Alias for
emitter.on(eventName, listener).event: string | symbol,listener: (...args: any[]) => void): this;Alias for
emitter.on(eventName, listener). - alt: string,): void;
Submits an
ALTSVCframe (as defined by RFC 7838) to the connected client.import http2 from 'node:http2'; const server = http2.createServer(); server.on('session', (session) => { // Set altsvc for origin https://example.org:80 session.altsvc('h2=":8000"', 'https://example.org:80'); }); server.on('stream', (stream) => { // Set altsvc for a specific stream stream.session.altsvc('h2=":8000"', stream.id); });Sending an
ALTSVCframe with a specific stream ID indicates that the alternate service is associated with the origin of the givenHttp2Stream.The
altand origin string must contain only ASCII bytes and are strictly interpreted as a sequence of ASCII bytes. The special value'clear'may be passed to clear any previously set alternative service for a given domain.When a string is passed for the
originOrStreamargument, it will be parsed as a URL and the origin will be derived. For instance, the origin for the HTTP URL'https://example.org/foo/bar'is the ASCII string'https://example.org'. An error will be thrown if either the given string cannot be parsed as a URL or if a valid origin cannot be derived.A
URLobject, or any object with anoriginproperty, may be passed asoriginOrStream, in which case the value of theoriginproperty will be used. The value of theoriginproperty must be a properly serialized ASCII origin.@param altA description of the alternative service configuration as defined by
RFC 7838.@param originOrStreamEither a URL string specifying the origin (or an
Objectwith anoriginproperty) or the numeric identifier of an activeHttp2Streamas given by thehttp2stream.idproperty. - callback?: () => void): void;
Gracefully closes the
Http2Session, allowing any existing streams to complete on their own and preventing newHttp2Streaminstances from being created. Once closed,http2session.destroy()might be called if there are no openHttp2Streaminstances.If specified, the
callbackfunction is registered as a handler for the'close'event. - code?: number): void;
Immediately terminates the
Http2Sessionand the associatednet.Socketortls.TLSSocket.Once destroyed, the
Http2Sessionwill emit the'close'event. Iferroris not undefined, an'error'event will be emitted immediately before the'close'event.If there are any remaining open
Http2Streamsassociated with theHttp2Session, those will also be destroyed.@param errorAn
Errorobject if theHttp2Sessionis being destroyed due to an error.@param codeThe HTTP/2 error code to send in the final
GOAWAYframe. If unspecified, anderroris not undefined, the default isINTERNAL_ERROR, otherwise defaults toNO_ERROR. - emit(event: 'connect',): boolean;
Synchronously calls each of the listeners registered for the event named
eventName, in the order they were registered, passing the supplied arguments to each.Returns
trueif the event had listeners,falseotherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listeneremit(event: 'stream',flags: number): boolean;Synchronously calls each of the listeners registered for the event named
eventName, in the order they were registered, passing the supplied arguments to each.Returns
trueif the event had listeners,falseotherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listeneremit(event: string | symbol,...args: any[]): boolean;Synchronously calls each of the listeners registered for the event named
eventName, in the order they were registered, passing the supplied arguments to each.Returns
trueif the event had listeners,falseotherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbols.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]Returns the current max listener value for the
EventEmitterwhich is either set byemitter.setMaxListeners(n)or defaults to EventEmitter.defaultMaxListeners.- code?: number,lastStreamID?: number,opaqueData?: ArrayBufferView<ArrayBufferLike>): void;
Transmits a
GOAWAYframe to the connected peer without shutting down theHttp2Session.@param codeAn HTTP/2 error code
@param lastStreamIDThe numeric ID of the last processed
Http2Stream@param opaqueDataA
TypedArrayorDataViewinstance containing additional data to be carried within theGOAWAYframe. - eventName: string | symbol,listener?: Function): number;
Returns the number of listeners listening for the event named
eventName. Iflisteneris provided, it will return how many times the listener is found in the list of the listeners of the event.@param eventNameThe name of the event being listened for
@param listenerThe event handler function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName.server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ] - eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for
emitter.removeListener(). - on(event: 'connect',listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>, socket: Socket | TLSSocket) => void): this;
Adds the
listenerfunction to the end of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
on(event: 'stream',): this;Adds the
listenerfunction to the end of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
on(event: string | symbol,listener: (...args: any[]) => void): this;Adds the
listenerfunction to the end of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
- once(event: 'connect',listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>, socket: Socket | TLSSocket) => void): this;
Adds a one-time
listenerfunction for the event namedeventName. The next timeeventNameis triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
once(event: 'stream',): this;Adds a one-time
listenerfunction for the event namedeventName. The next timeeventNameis triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
once(event: string | symbol,listener: (...args: any[]) => void): this;Adds a one-time
listenerfunction for the event namedeventName. The next timeeventNameis triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
- ): void;
Submits an
ORIGINframe (as defined by RFC 8336) to the connected client to advertise the set of origins for which the server is capable of providing authoritative responses.import http2 from 'node:http2'; const options = getSecureOptionsSomehow(); const server = http2.createSecureServer(options); server.on('stream', (stream) => { stream.respond(); stream.end('ok'); }); server.on('session', (session) => { session.origin('https://example.com', 'https://example.org'); });When a string is passed as an
origin, it will be parsed as a URL and the origin will be derived. For instance, the origin for the HTTP URL'https://example.org/foo/bar'is the ASCII string'https://example.org'. An error will be thrown if either the given string cannot be parsed as a URL or if a valid origin cannot be derived.A
URLobject, or any object with anoriginproperty, may be passed as anorigin, in which case the value of theoriginproperty will be used. The value of theoriginproperty must be a properly serialized ASCII origin.Alternatively, the
originsoption may be used when creating a new HTTP/2 server using thehttp2.createSecureServer()method:import http2 from 'node:http2'; const options = getSecureOptionsSomehow(); options.origins = ['https://example.com', 'https://example.org']; const server = http2.createSecureServer(options); server.on('stream', (stream) => { stream.respond(); stream.end('ok'); });@param originsOne or more URL Strings passed as separate arguments.
- ping(): boolean;
Sends a
PINGframe to the connected HTTP/2 peer. Acallbackfunction must be provided. The method will returntrueif thePINGwas sent,falseotherwise.The maximum number of outstanding (unacknowledged) pings is determined by the
maxOutstandingPingsconfiguration option. The default maximum is 10.If provided, the
payloadmust be aBuffer,TypedArray, orDataViewcontaining 8 bytes of data that will be transmitted with thePINGand returned with the ping acknowledgment.The callback will be invoked with three arguments: an error argument that will be
nullif thePINGwas successfully acknowledged, adurationargument that reports the number of milliseconds elapsed since the ping was sent and the acknowledgment was received, and aBuffercontaining the 8-bytePINGpayload.session.ping(Buffer.from('abcdefgh'), (err, duration, payload) => { if (!err) { console.log(`Ping acknowledged in ${duration} milliseconds`); console.log(`With payload '${payload.toString()}'`); } });If the
payloadargument is not specified, the default payload will be the 64-bit timestamp (little endian) marking the start of thePINGduration. - event: 'connect',listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>, socket: Socket | TLSSocket) => void): this;
Adds the
listenerfunction to the beginning of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
event: 'stream',): this;Adds the
listenerfunction to the beginning of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
event: string | symbol,listener: (...args: any[]) => void): this;Adds the
listenerfunction to the beginning of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
- event: 'connect',listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>, socket: Socket | TLSSocket) => void): this;
Adds a one-time
listenerfunction for the event namedeventNameto the beginning of the listeners array. The next timeeventNameis triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
event: 'stream',): this;Adds a one-time
listenerfunction for the event namedeventNameto the beginning of the listeners array. The next timeeventNameis triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
event: string | symbol,listener: (...args: any[]) => void): this;Adds a one-time
listenerfunction for the event namedeventNameto the beginning of the listeners array. The next timeeventNameis triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName, including any wrappers (such as those created by.once()).import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log'); Calls
ref()on thisHttp2Sessioninstance's underlyingnet.Socket.- eventName?: string | symbol): this;
Removes all listeners, or those of the specified
eventName.It is bad practice to remove listeners added elsewhere in the code, particularly when the
EventEmitterinstance was created by some other component or module (e.g. sockets or file streams).Returns a reference to the
EventEmitter, so that calls can be chained. - eventName: string | symbol,listener: (...args: any[]) => void): this;
Removes the specified
listenerfrom the listener array for the event namedeventName.const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback);removeListener()will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specifiedeventName, thenremoveListener()must be called multiple times to remove each instance.Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any
removeListener()orremoveAllListeners()calls after emitting and before the last listener finishes execution will not remove them fromemit()in progress. Subsequent events behave as expected.import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // ABecause listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the
emitter.listeners()method will need to be recreated.When a single function has been added as a handler multiple times for a single event (as in the example below),
removeListener()will remove the most recently added instance. In the example theonce('ping')listener is removed:import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping');Returns a reference to the
EventEmitter, so that calls can be chained. - windowSize: number): void;
Sets the local endpoint's window size. The
windowSizeis the total window size to set, not the delta.import http2 from 'node:http2'; const server = http2.createServer(); const expectedWindowSize = 2 ** 20; server.on('connect', (session) => { // Set local window size to be 2 ** 20 session.setLocalWindowSize(expectedWindowSize); }); - n: number): this;
By default
EventEmitters will print a warning if more than10listeners are added for a particular event. This is a useful default that helps finding memory leaks. Theemitter.setMaxListeners()method allows the limit to be modified for this specificEventEmitterinstance. The value can be set toInfinity(or0) to indicate an unlimited number of listeners.Returns a reference to the
EventEmitter, so that calls can be chained. - msecs: number,callback?: () => void): void;
Used to set a callback function that is called when there is no activity on the
Http2Sessionaftermsecsmilliseconds. The givencallbackis registered as a listener on the'timeout'event. - ): void;
Updates the current local settings for this
Http2Sessionand sends a newSETTINGSframe to the connected HTTP/2 peer.Once called, the
http2session.pendingSettingsAckproperty will betruewhile the session is waiting for the remote peer to acknowledge the new settings.The new settings will not become effective until the
SETTINGSacknowledgment is received and the'localSettings'event is emitted. It is possible to send multipleSETTINGSframes while acknowledgment is still pending.@param callbackCallback that is called once the session is connected or right away if the session is already connected.
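For example, a sketch that tightens the local limits for each server session:

server.on('session', (session) => {
  session.settings({ enablePush: false, maxConcurrentStreams: 100 });
  console.log(session.pendingSettingsAck); // true until the peer acknowledges

  session.on('localSettings', (settings) => {
    console.log('settings in effect:', settings.maxConcurrentStreams);
  });
});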
Calls
unref()on thisHttp2Sessioninstance's underlyingnet.Socket.
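As a usage sketch, the setTimeout described above pairs naturally with a graceful close:

server.on('session', (session) => {
  // Close sessions that have been idle for 60 seconds.
  session.setTimeout(60000, () => session.close());
});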
interface ServerHttp2Stream
Duplex streams are streams that implement both the
ReadableandWritableinterfaces.Examples of
Duplexstreams include:TCP socketszlib streamscrypto streams
- readonly aborted: boolean
Set to
trueif theHttp2Streaminstance was aborted abnormally. When set, the'aborted'event will have been emitted. - allowHalfOpen: boolean
If
falsethen the stream will automatically end the writable side when the readable side ends. Set initially by theallowHalfOpenconstructor option, which defaults totrue.This can be changed manually to change the half-open behavior of an existing
Duplexstream instance, but must be changed before the'end'event is emitted. - readonly bufferSize: number
This property shows the number of characters currently buffered to be written. See
net.Socket.bufferSizefor details. - readonly destroyed: boolean
Set to
trueif theHttp2Streaminstance has been destroyed and is no longer usable. - readonly endAfterHeaders: boolean
Set to
trueif theEND_STREAMflag was set in the request or response HEADERS frame received, indicating that no additional data should be received and the readable side of theHttp2Streamwill be closed. - readonly id?: number
The numeric stream identifier of this
Http2Streaminstance. Set toundefinedif the stream identifier has not yet been assigned. - readonly pending: boolean
Set to
trueif theHttp2Streaminstance has not yet been assigned a numeric stream identifier. - readonly pushAllowed: boolean
Read-only property mapped to the
SETTINGS_ENABLE_PUSHflag of the remote client's most recentSETTINGSframe. Will betrueif the remote peer accepts push streams,falseotherwise. Settings are the same for everyHttp2Streamin the sameHttp2Session. - readable: boolean
Is
trueif it is safe to call read, which means the stream has not been destroyed or emitted'error'or'end'. - readonly readableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'end'. - readonly readableEncoding: null | BufferEncoding
Getter for the property
encodingof a givenReadablestream. Theencodingproperty can be set using the setEncoding method. - readonly readableFlowing: null | boolean
This property reflects the current state of a
Readablestream as described in the Three states section. - readonly readableHighWaterMark: number
Returns the value of
highWaterMarkpassed when creating thisReadable. - readonly readableLength: number
This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the
highWaterMark. - readonly rstCode: number
Set to the
RST_STREAMerror codereported when theHttp2Streamis destroyed after either receiving anRST_STREAMframe from the connected peer, callinghttp2stream.close(), orhttp2stream.destroy(). Will beundefinedif theHttp2Streamhas not been closed. - readonly sentHeaders: OutgoingHttpHeaders
An object containing the outbound headers sent for this
Http2Stream. - readonly sentInfoHeaders?: OutgoingHttpHeaders[]
An array of objects containing the outbound informational (additional) headers sent for this
Http2Stream. - readonly sentTrailers?: OutgoingHttpHeaders
An object containing the outbound trailers sent for this
HttpStream. - readonly session: undefined | Http2Session
A reference to the
Http2Sessioninstance that owns thisHttp2Stream. The value will beundefinedafter theHttp2Streaminstance is destroyed. - readonly state: StreamState
Provides miscellaneous information about the current state of the
Http2Stream.A current state of this
Http2Stream. - readonly writable: boolean
Is
trueif it is safe to callwritable.write(), which means the stream has not been destroyed, errored, or ended. - readonly writableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'finish'. - readonly writableCorked: number
Number of times
writable.uncork()needs to be called in order to fully uncork the stream. - readonly writableEnded: boolean
Is
trueafterwritable.end()has been called. This property does not indicate whether the data has been flushed, for this usewritable.writableFinishedinstead. - readonly writableHighWaterMark: number
Return the value of
highWaterMarkpassed when creating thisWritable. - readonly writableLength: number
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the
highWaterMark. - readonly writableNeedDrain: boolean
Is
trueif the stream's buffer has been full and stream will emit'drain'. Calls
readable.destroy()with anAbortErrorand returns a promise that fulfills when the stream is finished.- @returns
AsyncIteratorto fully consume the stream. - ): void;
Sends an additional informational
HEADERSframe to the connected HTTP/2 peer. - event: 'aborted',listener: () => void): this;
Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'close',listener: () => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'data',): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'drain',listener: () => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'end',listener: () => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'error',): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'finish',listener: () => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'frameError',listener: (frameType: number, errorCode: number) => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'pipe',): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'unpipe',): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'streamClosed',listener: (code: number) => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'timeout',listener: () => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'trailers',): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'wantTrailers',listener: () => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: string | symbol,listener: (...args: any[]) => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
This method returns a new stream with chunks of the underlying stream paired with a counter in the form
[index, chunk]. The first index value is0and it increases by 1 for each chunk produced.@returnsa stream of indexed pairs.
- code?: number,callback?: () => void): void;
Closes the
Http2Streaminstance by sending anRST_STREAMframe to the connected HTTP/2 peer.@param codeUnsigned 32-bit integer identifying the error code.
@param callbackAn optional function registered to listen for the
'close'event. - stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
The
writable.cork()method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.The primary intent of
writable.cork()is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination,writable.cork()buffers all the chunks untilwritable.uncork()is called, which will pass them all towritable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use ofwritable.cork()without implementingwritable._writev()may have an adverse effect on throughput.See also:
writable.uncork(),writable._writev().- ): this;
Destroy the stream. Optionally emit an
'error'event, and emit a'close'event (unlessemitCloseis set tofalse). After this call, the readable stream will release any internal resources and subsequent calls topush()will be ignored.Once
destroy()has been called any further calls will be a no-op and no further errors except from_destroy()may be emitted as'error'.Implementors should not override this method, but instead implement
readable._destroy().@param errorError which will be passed as payload in
'error'event - drop(limit: number,
This method returns a new stream with the first limit chunks dropped from the start.
@param limitthe number of chunks to drop from the readable.
@returnsa stream with limit chunks dropped from the start.
- emit(event: 'aborted'): boolean;
Synchronously calls each of the listeners registered for the event named
eventName, in the order they were registered, passing the supplied arguments to each.Returns
trueif the event had listeners,falseotherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener - end(cb?: () => void): this;
Calling the
writable.end()method signals that no more data will be written to theWritable. The optionalchunkandencodingarguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!end(chunk: any,cb?: () => void): this;Calling the
writable.end()method signals that no more data will be written to theWritable. The optionalchunkandencodingarguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.end(chunk: any,encoding: BufferEncoding,cb?: () => void): this;Calling the
writable.end()method signals that no more data will be written to theWritable. The optionalchunkandencodingarguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.@param encodingThe encoding if
chunkis a string Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbols.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]- ): Promise<boolean>;
This method is similar to
Array.prototype.everyand calls fn on each chunk in the stream to check if all awaited return values are truthy value for fn. Once an fn call on a chunkawaited return value is falsy, the stream is destroyed and the promise is fulfilled withfalse. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled withtrue.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
trueif fn returned a truthy value for every one of the chunks. This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be
awaited.@param fna function to filter chunks from the stream. Async or not.
@returnsa stream filtered with the predicate fn.
- ): Promise<undefined | T>;
This method is similar to
Array.prototype.findand calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled withundefined.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefinedif no element was found.find(): Promise<any>;This method is similar to
Array.prototype.findand calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled withundefined.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefinedif no element was found. This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.
It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.
@param fna function to map over every chunk in the stream. May be async. May be a stream or generator.
@returnsa stream flat-mapped with the function fn.
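A small hedged sketch combining flatMap() and find(); the line-splitting and the startsWith predicate are illustrative choices, not prescribed usage:

import { Readable } from 'node:stream';

// Split each line into words (flattened into a single stream), then stop at the first match.
const words = Readable.from(['hello world', 'find me']).flatMap((line) => line.split(' '));
console.log(await words.find((word) => word.startsWith('f'))); // 'find'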
- ): Promise<void>;
This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be
awaited.This method is different from
for await...ofloops in that it can optionally process chunks concurrently. In addition, aforEachiteration can only be stopped by having passed asignaloption and aborting the related AbortController whilefor await...ofcan be stopped withbreakorreturn. In either case the stream will be destroyed.This method is different from listening to the
'data' event in that it uses the readable event in the underlying machinery and can limit the number of concurrent fn calls.@param fn a function to call on each chunk of the stream. Async or not.
@returnsa promise for when the stream has finished.
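A minimal forEach() sketch, assuming an in-memory Readable; the concurrency value of 2 is only an example:

import { Readable } from 'node:stream';

// Log each chunk, processing up to two chunks at a time.
await Readable.from([1, 2, 3, 4]).forEach(async (n) => {
  console.log(n);
}, { concurrency: 2 });
console.log('done');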
Returns the current max listener value for the
EventEmitterwhich is either set byemitter.setMaxListeners(n)or defaults to EventEmitter.defaultMaxListeners.The
readable.isPaused()method returns the current operating state of theReadable. This is used primarily by the mechanism that underlies thereadable.pipe()method. In most typical cases, there will be no reason to use this method directly.const readable = new stream.Readable(); readable.isPaused(); // === false readable.pause(); readable.isPaused(); // === true readable.resume(); readable.isPaused(); // === false- options?: { destroyOnReturn: boolean }): AsyncIterator<any>;
The iterator created by this method gives users the option to cancel the destruction of the stream if the
for await...ofloop is exited byreturn,break, orthrow, or if the iterator should destroy the stream if the stream emitted an error during iteration. - eventName: string | symbol,listener?: Function): number;
Returns the number of listeners listening for the event named
eventName. Iflisteneris provided, it will return how many times the listener is found in the list of the listeners of the event.@param eventNameThe name of the event being listened for
@param listenerThe event handler function
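A short hedged sketch of listenerCount(), with and without the optional listener argument (the event name and handlers are illustrative):

import { EventEmitter } from 'node:events';

const myEE = new EventEmitter();
const ping = () => {};
myEE.on('ping', ping);
myEE.on('ping', () => {});
console.log(myEE.listenerCount('ping'));       // 2
console.log(myEE.listenerCount('ping', ping)); // 1 (that specific listener was added once)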
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName.server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ] - map(
This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be
awaited before being passed to the result stream.@param fna function to map over every chunk in the stream. Async or not.
@returnsa stream mapped with the function fn.
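A minimal map() sketch; the delay stands in for an arbitrary async lookup and is purely illustrative:

import { Readable } from 'node:stream';
import { setTimeout as wait } from 'node:timers/promises';

// Each chunk is transformed; returned promises are awaited before the result is emitted downstream.
const doubled = Readable.from([1, 2, 3]).map(async (n) => {
  await wait(10); // stand-in for an async lookup
  return n * 2;
});
for await (const n of doubled) console.log(n); // 2, 4, 6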
- eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for
emitter.removeListener(). - on(event: 'aborted',listener: () => void): this;
Adds the
listenerfunction to the end of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
- once(event: 'aborted',listener: () => void): this;
Adds a one-time
listenerfunction for the event namedeventName. The next timeeventNameis triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
The
readable.pause()method will cause a stream in flowing mode to stop emitting'data'events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.const readable = getReadableStreamSomehow(); readable.on('data', (chunk) => { console.log(`Received ${chunk.length} bytes of data.`); readable.pause(); console.log('There will be no additional data for 1 second.'); setTimeout(() => { console.log('Now data will start flowing again.'); readable.resume(); }, 1000); });The
readable.pause()method has no effect if there is a'readable'event listener.- event: 'aborted',listener: () => void): this;
Adds the
listenerfunction to the beginning of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
event: 'frameError',listener: (frameType: number, errorCode: number) => void): this;event: 'trailers',): this; - event: 'aborted',listener: () => void): this;
Adds a one-time
listenerfunction for the event namedeventNameto the beginning of the listeners array. The next timeeventNameis triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
event: 'data',): this;event: 'frameError',listener: (frameType: number, errorCode: number) => void): this;event: 'trailers',): this; - ): void;
Initiates a push stream. The callback is invoked with the new
Http2Streaminstance created for the push stream passed as the second argument, or anErrorpassed as the first argument.import http2 from 'node:http2'; const server = http2.createServer(); server.on('stream', (stream) => { stream.respond({ ':status': 200 }); stream.pushStream({ ':path': '/' }, (err, pushStream, headers) => { if (err) throw err; pushStream.respond({ ':status': 200 }); pushStream.end('some pushed data'); }); stream.end('some data'); });Setting the weight of a push stream is not allowed in the
HEADERSframe. Pass aweightvalue tohttp2stream.prioritywith thesilentoption set totrueto enable server-side bandwidth balancing between concurrent streams.Calling
http2stream.pushStream()from within a pushed stream is not permitted and will throw an error.@param callbackCallback that is called once the push stream has been initiated.
): void; - eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName, including any wrappers (such as those created by.once()).import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log'); - read(size?: number): any;
The
readable.read()method reads data out of the internal buffer and returns it. If no data is available to be read,nullis returned. By default, the data is returned as aBufferobject unless an encoding has been specified using thereadable.setEncoding()method or the stream is operating in object mode.The optional
sizeargument specifies a specific number of bytes to read. Ifsizebytes are not available to be read,nullwill be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.If the
sizeargument is not specified, all of the data contained in the internal buffer will be returned.The
sizeargument must be less than or equal to 1 GiB.The
readable.read()method should only be called onReadablestreams operating in paused mode. In flowing mode,readable.read()is called automatically until the internal buffer is fully drained.const readable = getReadableStreamSomehow(); // 'readable' may be triggered multiple times as data is buffered in readable.on('readable', () => { let chunk; console.log('Stream is readable (new data received in buffer)'); // Use a loop to make sure we read all currently available data while (null !== (chunk = readable.read())) { console.log(`Read ${chunk.length} bytes of data...`); } }); // 'end' will be triggered once when there is no more data available readable.on('end', () => { console.log('Reached end of stream.'); });Each call to
readable.read() returns a chunk of data, or null. The chunks are not concatenated. A while loop is necessary to consume all data currently in the buffer. When reading a large file, .read() may return null, having consumed all buffered content so far, while more data that has not yet been buffered is still to come. In this case a new 'readable' event will be emitted when there is more data in the buffer. Finally, the 'end' event will be emitted when there is no more data to come.Therefore, to read a file's whole contents from a
readable, it is necessary to collect chunks across multiple'readable'events:const chunks = []; readable.on('readable', () => { let chunk; while (null !== (chunk = readable.read())) { chunks.push(chunk); } }); readable.on('end', () => { const content = chunks.join(''); });A
Readablestream in object mode will always return a single item from a call toreadable.read(size), regardless of the value of thesizeargument.If the
readable.read()method returns a chunk of data, a'data'event will also be emitted.Calling read after the
'end'event has been emitted will returnnull. No runtime error will be raised.@param sizeOptional argument to specify how much data to read.
- initial?: undefined,): Promise<T>;
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeError with the ERR_INVALID_ARGS code property.The reducer function iterates the stream element by element, which means there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async work into a readable.map call and reduce over the mapped stream.@param fn a reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
initial: T,): Promise<T>;This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeError with the ERR_INVALID_ARGS code property.The reducer function iterates the stream element by element, which means there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async work into a readable.map call and reduce over the mapped stream.@param fn a reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
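A minimal reduce() sketch over an in-memory Readable; the summing reducer and initial value are illustrative:

import { Readable } from 'node:stream';

// Sum the chunks; the explicit initial value 0 avoids a TypeError on an empty stream.
const total = await Readable.from([1, 2, 3, 4]).reduce((sum, n) => sum + n, 0);
console.log(total); // 10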
- eventName?: string | symbol): this;
Removes all listeners, or those of the specified
eventName.It is bad practice to remove listeners added elsewhere in the code, particularly when the
EventEmitterinstance was created by some other component or module (e.g. sockets or file streams).Returns a reference to the
EventEmitter, so that calls can be chained. - event: 'close',listener: () => void): this;
Removes the specified
listenerfrom the listener array for the event namedeventName.const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback);removeListener()will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specifiedeventName, thenremoveListener()must be called multiple times to remove each instance.Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any
removeListener()orremoveAllListeners()calls after emitting and before the last listener finishes execution will not remove them fromemit()in progress. Subsequent events behave as expected.import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // ABecause listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the
emitter.listeners()method will need to be recreated.When a single function has been added as a handler multiple times for a single event (as in the example below),
removeListener()will remove the most recently added instance. In the example theonce('ping')listener is removed:import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping');Returns a reference to the
EventEmitter, so that calls can be chained. - ): void;
import http2 from 'node:http2'; const server = http2.createServer(); server.on('stream', (stream) => { stream.respond({ ':status': 200 }); stream.end('some data'); });Initiates a response. When the
options.waitForTrailersoption is set, the'wantTrailers'event will be emitted immediately after queuing the last chunk of payload data to be sent. Thehttp2stream.sendTrailers()method can then be used to send trailing header fields to the peer.When
options.waitForTrailersis set, theHttp2Streamwill not automatically close when the finalDATAframe is transmitted. User code must call eitherhttp2stream.sendTrailers()orhttp2stream.close()to close theHttp2Stream.import http2 from 'node:http2'; const server = http2.createServer(); server.on('stream', (stream) => { stream.respond({ ':status': 200 }, { waitForTrailers: true }); stream.on('wantTrailers', () => { stream.sendTrailers({ ABC: 'some value to send' }); }); stream.end('some data'); }); - ): void;
Initiates a response whose data is read from the given file descriptor. No validation is performed on the given file descriptor. If an error occurs while attempting to read data using the file descriptor, the
Http2Streamwill be closed using anRST_STREAMframe using the standardINTERNAL_ERRORcode.When used, the
Http2Streamobject'sDuplexinterface will be closed automatically.import http2 from 'node:http2'; import fs from 'node:fs'; const server = http2.createServer(); server.on('stream', (stream) => { const fd = fs.openSync('/some/file', 'r'); const stat = fs.fstatSync(fd); const headers = { 'content-length': stat.size, 'last-modified': stat.mtime.toUTCString(), 'content-type': 'text/plain; charset=utf-8', }; stream.respondWithFD(fd, headers); stream.on('close', () => fs.closeSync(fd)); });The optional
options.statCheckfunction may be specified to give user code an opportunity to set additional content headers based on thefs.Statdetails of the given fd. If thestatCheckfunction is provided, thehttp2stream.respondWithFD()method will perform anfs.fstat()call to collect details on the provided file descriptor.The
offsetandlengthoptions may be used to limit the response to a specific range subset. This can be used, for instance, to support HTTP Range requests.The file descriptor or
FileHandleis not closed when the stream is closed, so it will need to be closed manually once it is no longer needed. Using the same file descriptor concurrently for multiple streams is not supported and may result in data loss. Re-using a file descriptor after a stream has finished is supported.When the
options.waitForTrailers option is set, the 'wantTrailers' event will be emitted immediately after queuing the last chunk of payload data to be sent. The http2stream.sendTrailers() method can then be used to send trailing header fields to the peer.When
options.waitForTrailersis set, theHttp2Streamwill not automatically close when the finalDATAframe is transmitted. User code must call eitherhttp2stream.sendTrailers()orhttp2stream.close()to close theHttp2Stream.import http2 from 'node:http2'; import fs from 'node:fs'; const server = http2.createServer(); server.on('stream', (stream) => { const fd = fs.openSync('/some/file', 'r'); const stat = fs.fstatSync(fd); const headers = { 'content-length': stat.size, 'last-modified': stat.mtime.toUTCString(), 'content-type': 'text/plain; charset=utf-8', }; stream.respondWithFD(fd, headers, { waitForTrailers: true }); stream.on('wantTrailers', () => { stream.sendTrailers({ ABC: 'some value to send' }); }); stream.on('close', () => fs.closeSync(fd)); });@param fdA readable file descriptor.
- path: string,): void;
Sends a regular file as the response. The
pathmust specify a regular file or an'error'event will be emitted on theHttp2Streamobject.When used, the
Http2Streamobject'sDuplexinterface will be closed automatically.The optional
options.statCheckfunction may be specified to give user code an opportunity to set additional content headers based on thefs.Statdetails of the given file:If an error occurs while attempting to read the file data, the
Http2Streamwill be closed using anRST_STREAMframe using the standardINTERNAL_ERRORcode. If theonErrorcallback is defined, then it will be called. Otherwise, the stream will be destroyed.Example using a file path:
import http2 from 'node:http2'; const server = http2.createServer(); server.on('stream', (stream) => { function statCheck(stat, headers) { headers['last-modified'] = stat.mtime.toUTCString(); } function onError(err) { // stream.respond() can throw if the stream has been destroyed by // the other side. try { if (err.code === 'ENOENT') { stream.respond({ ':status': 404 }); } else { stream.respond({ ':status': 500 }); } } catch (err) { // Perform actual error handling. console.error(err); } stream.end(); } stream.respondWithFile('/some/file', { 'content-type': 'text/plain; charset=utf-8' }, { statCheck, onError }); });The
options.statCheckfunction may also be used to cancel the send operation by returningfalse. For instance, a conditional request may check the stat results to determine if the file has been modified to return an appropriate304response:import http2 from 'node:http2'; const server = http2.createServer(); server.on('stream', (stream) => { function statCheck(stat, headers) { // Check the stat here... stream.respond({ ':status': 304 }); return false; // Cancel the send operation } stream.respondWithFile('/some/file', { 'content-type': 'text/plain; charset=utf-8' }, { statCheck }); });The
content-lengthheader field will be automatically set.The
offsetandlengthoptions may be used to limit the response to a specific range subset. This can be used, for instance, to support HTTP Range requests.The
options.onErrorfunction may also be used to handle all the errors that could happen before the delivery of the file is initiated. The default behavior is to destroy the stream.When the
options.waitForTrailers option is set, the 'wantTrailers' event will be emitted immediately after queuing the last chunk of payload data to be sent. The http2stream.sendTrailers() method can then be used to send trailing header fields to the peer.When
options.waitForTrailersis set, theHttp2Streamwill not automatically close when the finalDATAframe is transmitted. User code must call eitherhttp2stream.sendTrailers()orhttp2stream.close()to close theHttp2Stream.import http2 from 'node:http2'; const server = http2.createServer(); server.on('stream', (stream) => { stream.respondWithFile('/some/file', { 'content-type': 'text/plain; charset=utf-8' }, { waitForTrailers: true }); stream.on('wantTrailers', () => { stream.sendTrailers({ ABC: 'some value to send' }); }); }); The
readable.resume()method causes an explicitly pausedReadablestream to resume emitting'data'events, switching the stream into flowing mode.The
readable.resume()method can be used to fully consume the data from a stream without actually processing any of that data:getReadableStreamSomehow() .resume() .on('end', () => { console.log('Reached the end, but did not read anything.'); });The
readable.resume()method has no effect if there is a'readable'event listener.- ): void;
Sends a trailing
HEADERSframe to the connected HTTP/2 peer. This method will cause theHttp2Streamto be immediately closed and must only be called after the'wantTrailers'event has been emitted. When sending a request or sending a response, theoptions.waitForTrailersoption must be set in order to keep theHttp2Streamopen after the finalDATAframe so that trailers can be sent.import http2 from 'node:http2'; const server = http2.createServer(); server.on('stream', (stream) => { stream.respond(undefined, { waitForTrailers: true }); stream.on('wantTrailers', () => { stream.sendTrailers({ xyz: 'abc' }); }); stream.end('Hello World'); });The HTTP/1 specification forbids trailers from containing HTTP/2 pseudo-header fields (e.g.
':method',':path', etc). - encoding: BufferEncoding): this;
The
writable.setDefaultEncoding()method sets the defaultencodingfor aWritablestream.@param encodingThe new default encoding
- encoding: BufferEncoding): this;
The
readable.setEncoding()method sets the character encoding for data read from theReadablestream.By default, no encoding is assigned and stream data will be returned as
Bufferobjects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than asBufferobjects. For instance, callingreadable.setEncoding('utf8')will cause the output data to be interpreted as UTF-8 data, and passed as strings. Callingreadable.setEncoding('hex')will cause the data to be encoded in hexadecimal string format.The
Readablestream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream asBufferobjects.const readable = getReadableStreamSomehow(); readable.setEncoding('utf8'); readable.on('data', (chunk) => { assert.equal(typeof chunk, 'string'); console.log('Got %d characters of string data:', chunk.length); });@param encodingThe encoding to use.
- n: number): this;
By default
EventEmitters will print a warning if more than10listeners are added for a particular event. This is a useful default that helps finding memory leaks. Theemitter.setMaxListeners()method allows the limit to be modified for this specificEventEmitterinstance. The value can be set toInfinity(or0) to indicate an unlimited number of listeners.Returns a reference to the
EventEmitter, so that calls can be chained. - msecs: number,callback?: () => void): void;
import http2 from 'node:http2'; const client = http2.connect('http://example.org:8000'); const { NGHTTP2_CANCEL } = http2.constants; const req = client.request({ ':path': '/' }); // Cancel the stream if there's no activity after 5 seconds req.setTimeout(5000, () => req.close(NGHTTP2_CANCEL)); - some(): Promise<boolean>;
This method is similar to
Array.prototype.some and calls fn on each chunk in the stream until an awaited return value is true (or any truthy value). Once the awaited return value of an fn call on a chunk is truthy, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false.@param fn a function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
trueif fn returned a truthy value for at least one of the chunks. - @param limit
the number of chunks to take from the readable.
@returnsa stream with limit chunks taken.
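A minimal sketch combining take() and some(); the limit of 2 and the evenness predicate are illustrative only:

import { Readable } from 'node:stream';

// take(2) limits the stream to its first two chunks; some() short-circuits on the first truthy result.
const firstTwo = Readable.from([1, 2, 3, 4]).take(2);
console.log(await firstTwo.some((n) => n % 2 === 0)); // true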
- ): Promise<any[]>;
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.
@returnsa promise containing an array with the contents of the stream.
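A minimal toArray() sketch, assuming a small in-memory source (this method buffers everything, so keep it to small inputs or tests):

import { Readable } from 'node:stream';

// Collects the entire stream into memory and resolves with the chunks in order.
const chunks = await Readable.from(['a', 'b', 'c']).toArray();
console.log(chunks); // [ 'a', 'b', 'c' ]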
The
writable.uncork()method flushes all data buffered since cork was called.When using
writable.cork()andwritable.uncork()to manage the buffering of writes to a stream, defer calls towritable.uncork()usingprocess.nextTick(). Doing so allows batching of allwritable.write()calls that occur within a given Node.js event loop phase.stream.cork(); stream.write('some '); stream.write('data '); process.nextTick(() => stream.uncork());If the
writable.cork()method is called multiple times on a stream, the same number of calls towritable.uncork()must be called to flush the buffered data.stream.cork(); stream.write('some '); stream.cork(); stream.write('data '); process.nextTick(() => { stream.uncork(); // The data will not be flushed until uncork() is called a second time. stream.uncork(); });See also:
writable.cork().- destination?: WritableStream): this;
The
readable.unpipe()method detaches aWritablestream previously attached using the pipe method.If the
destinationis not specified, then all pipes are detached.If the
destinationis specified, but no pipe is set up for it, then the method does nothing.import fs from 'node:fs'; const readable = getReadableStreamSomehow(); const writable = fs.createWriteStream('file.txt'); // All the data from readable goes into 'file.txt', // but only for the first second. readable.pipe(writable); setTimeout(() => { console.log('Stop writing to file.txt.'); readable.unpipe(writable); console.log('Manually close the file stream.'); writable.end(); }, 1000);@param destinationOptional specific stream to unpipe
- chunk: any,encoding?: BufferEncoding): void;
Passing
chunkasnullsignals the end of the stream (EOF) and behaves the same asreadable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.The
readable.unshift()method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.The
stream.unshift(chunk)method cannot be called after the'end'event has been emitted or a runtime error will be thrown.Developers using
stream.unshift()often should consider switching to use of aTransformstream instead. See theAPI for stream implementerssection for more information.// Pull off a header delimited by \n\n. // Use unshift() if we get too much. // Call the callback with (error, header, stream). import { StringDecoder } from 'node:string_decoder'; function parseHeader(stream, callback) { stream.on('error', callback); stream.on('readable', onReadable); const decoder = new StringDecoder('utf8'); let header = ''; function onReadable() { let chunk; while (null !== (chunk = stream.read())) { const str = decoder.write(chunk); if (str.includes('\n\n')) { // Found the header boundary. const split = str.split(/\n\n/); header += split.shift(); const remaining = split.join('\n\n'); const buf = Buffer.from(remaining, 'utf8'); stream.removeListener('error', callback); // Remove the 'readable' listener before unshifting. stream.removeListener('readable', onReadable); if (buf.length) stream.unshift(buf); // Now the body of the message can be read from the stream. callback(null, header, stream); return; } // Still reading the header. header += str; } } }Unlike push,
stream.unshift(chunk)will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results ifreadable.unshift()is called during a read (i.e. from within a _read implementation on a custom stream). Following the call toreadable.unshift()with an immediate push will reset the reading state appropriately, however it is best to simply avoid callingreadable.unshift()while in the process of performing a read.@param chunkChunk of data to unshift onto the read queue. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray}, {DataView} ornull. For object mode streams,chunkmay be any JavaScript value.@param encodingEncoding of string chunks. Must be a valid
Bufferencoding, such as'utf8'or'ascii'. - wrap(stream: ReadableStream): this;
Prior to Node.js 0.10, streams did not implement the entire
node:streammodule API as it is currently defined. (SeeCompatibilityfor more information.)When using an older Node.js library that emits
'data'events and has a pause method that is advisory only, thereadable.wrap()method can be used to create aReadablestream that uses the old stream as its data source.It will rarely be necessary to use
readable.wrap()but the method has been provided as a convenience for interacting with older Node.js applications and libraries.import { OldReader } from './old-api-module.js'; import { Readable } from 'node:stream'; const oreader = new OldReader(); const myReader = new Readable().wrap(oreader); myReader.on('readable', () => { myReader.read(); // etc. });@param streamAn "old style" readable stream
- chunk: any,): boolean;
The
writable.write()method writes some data to the stream, and calls the suppliedcallbackonce the data has been fully handled. If an error occurs, thecallbackwill be called with the error as its first argument. Thecallbackis called asynchronously and before'error'is emitted.The return value is
trueif the internal buffer is less than thehighWaterMarkconfigured when the stream was created after admittingchunk. Iffalseis returned, further attempts to write data to the stream should stop until the'drain'event is emitted.While a stream is not draining, calls to
write()will bufferchunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'event will be emitted. Oncewrite()returns false, do not write more chunks until the'drain'event is emitted. While callingwrite()on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform, because theTransformstreams are paused by default until they are piped or a'data'or'readable'event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readableand use pipe. However, if callingwrite()is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });A
Writablestream in object mode will always ignore theencodingargument.@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.@param callbackCallback for when this chunk of data is flushed.
@returnsfalseif the stream wishes for the calling code to wait for the'drain'event to be emitted before continuing to write additional data; otherwisetrue.chunk: any,encoding: BufferEncoding,): boolean;The
writable.write()method writes some data to the stream, and calls the suppliedcallbackonce the data has been fully handled. If an error occurs, thecallbackwill be called with the error as its first argument. Thecallbackis called asynchronously and before'error'is emitted.The return value is
trueif the internal buffer is less than thehighWaterMarkconfigured when the stream was created after admittingchunk. Iffalseis returned, further attempts to write data to the stream should stop until the'drain'event is emitted.While a stream is not draining, calls to
write()will bufferchunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'event will be emitted. Oncewrite()returns false, do not write more chunks until the'drain'event is emitted. While callingwrite()on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform, because theTransformstreams are paused by default until they are piped or a'data'or'readable'event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readableand use pipe. However, if callingwrite()is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });A
Writablestream in object mode will always ignore theencodingargument.@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.@param encodingThe encoding, if
chunkis a string.@param callbackCallback for when this chunk of data is flushed.
@returnsfalseif the stream wishes for the calling code to wait for the'drain'event to be emitted before continuing to write additional data; otherwisetrue.
interface ServerOptions<Http1Request extends typeof IncomingMessage = typeof IncomingMessage, Http1Response extends typeof ServerResponse = typeof ServerResponse, Http2Request extends typeof Http2ServerRequest = typeof Http2ServerRequest, Http2Response extends typeof Http2ServerResponse = typeof Http2ServerResponse>
- maxHeaderListPairs?: number
Sets the maximum number of header entries. This is similar to
server.maxHeadersCountorrequest.maxHeadersCountin thenode:httpmodule. The minimum value is1. - maxSendHeaderBlockLength?: number
Sets the maximum allowed size for a serialized, compressed block of headers. Attempts to send headers that exceed this limit will result in a
'frameError'event being emitted and the stream being closed and destroyed. - maxSessionMemory?: number
Sets the maximum memory that the
Http2Session is permitted to use. The value is expressed in number of megabytes, e.g. 1 equals 1 megabyte. The minimum value allowed is 1. This is a credit-based limit: existing Http2Streams may cause this limit to be exceeded, but new Http2Stream instances will be rejected while this limit is exceeded. The current number of Http2Stream sessions, the current memory use of the header compression tables, current data queued to be sent, and unacknowledged PING and SETTINGS frames are all counted towards the current limit.
Sets the maximum number of settings entries per
SETTINGSframe. The minimum value allowed is1. - paddingStrategy?: number
Strategy used for determining the amount of padding to use for
HEADERSandDATAframes. - peerMaxConcurrentStreams?: number
Sets the maximum number of concurrent streams for the remote peer as if a
SETTINGSframe had been received. Will be overridden if the remote peer sets its own value formaxConcurrentStreams. - remoteCustomSettings?: number[]
The array of integer values determines the settings types, which are included in the
CustomSettings property of the received remoteSettings. Please see the CustomSettings property of the Http2Settings object for more information on the allowed setting types.
If
true, it turns on strict leading and trailing whitespace validation for HTTP/2 header field names and values as per RFC-9113. - unknownProtocolTimeout?: number
Specifies a timeout in milliseconds that a server should wait when an 'unknownProtocol' event is emitted. If the socket has not been destroyed by that time the server will destroy it.
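As a hedged illustration of how these ServerOptions can be passed to http2.createServer(), the values below are arbitrary examples rather than recommended defaults:

import http2 from 'node:http2';

// All options shown here are optional and fall back to their defaults when omitted.
const server = http2.createServer({
  maxSessionMemory: 10,          // allow up to ~10 MB of per-session state
  maxHeaderListPairs: 256,       // cap the number of header entries
  maxSettings: 32,               // cap the entries accepted per SETTINGS frame
  paddingStrategy: http2.constants.PADDING_STRATEGY_NONE,
  unknownProtocolTimeout: 10000, // destroy sockets that never start HTTP/2 within 10 s
});
server.on('stream', (stream) => {
  stream.respond({ ':status': 200 });
  stream.end('ok');
});
server.listen(0);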
interface ServerSessionOptions<Http1Request extends typeof IncomingMessage = typeof IncomingMessage, Http1Response extends typeof ServerResponse = typeof ServerResponse, Http2Request extends typeof Http2ServerRequest = typeof Http2ServerRequest, Http2Response extends typeof Http2ServerResponse = typeof Http2ServerResponse>
- maxHeaderListPairs?: number
Sets the maximum number of header entries. This is similar to
server.maxHeadersCountorrequest.maxHeadersCountin thenode:httpmodule. The minimum value is1. - maxSendHeaderBlockLength?: number
Sets the maximum allowed size for a serialized, compressed block of headers. Attempts to send headers that exceed this limit will result in a
'frameError'event being emitted and the stream being closed and destroyed. - maxSessionMemory?: number
Sets the maximum memory that the
Http2Session is permitted to use. The value is expressed in number of megabytes, e.g. 1 equals 1 megabyte. The minimum value allowed is 1. This is a credit-based limit: existing Http2Streams may cause this limit to be exceeded, but new Http2Stream instances will be rejected while this limit is exceeded. The current number of Http2Stream sessions, the current memory use of the header compression tables, current data queued to be sent, and unacknowledged PING and SETTINGS frames are all counted towards the current limit.
Sets the maximum number of settings entries per
SETTINGSframe. The minimum value allowed is1. - paddingStrategy?: number
Strategy used for determining the amount of padding to use for
HEADERSandDATAframes. - peerMaxConcurrentStreams?: number
Sets the maximum number of concurrent streams for the remote peer as if a
SETTINGSframe had been received. Will be overridden if the remote peer sets its own value formaxConcurrentStreams. - remoteCustomSettings?: number[]
The array of integer values determines the settings types, which are included in the
CustomSettings property of the received remoteSettings. Please see the CustomSettings property of the Http2Settings object for more information on the allowed setting types.
If
true, it turns on strict leading and trailing whitespace validation for HTTP/2 header field names and values as per RFC-9113. - unknownProtocolTimeout?: number
Specifies a timeout in milliseconds that a server should wait when an 'unknownProtocol' event is emitted. If the socket has not been destroyed by that time the server will destroy it.
interface ServerStreamFileResponseOptions
interface ServerStreamFileResponseOptionsWithError
interface ServerStreamResponseOptions
interface SessionOptions
- maxHeaderListPairs?: number
Sets the maximum number of header entries. This is similar to
server.maxHeadersCountorrequest.maxHeadersCountin thenode:httpmodule. The minimum value is1. - maxSendHeaderBlockLength?: number
Sets the maximum allowed size for a serialized, compressed block of headers. Attempts to send headers that exceed this limit will result in a
'frameError'event being emitted and the stream being closed and destroyed. - maxSessionMemory?: number
Sets the maximum memory that the
Http2Session is permitted to use. The value is expressed in number of megabytes, e.g. 1 equals 1 megabyte. The minimum value allowed is 1. This is a credit-based limit: existing Http2Streams may cause this limit to be exceeded, but new Http2Stream instances will be rejected while this limit is exceeded. The current number of Http2Stream sessions, the current memory use of the header compression tables, current data queued to be sent, and unacknowledged PING and SETTINGS frames are all counted towards the current limit.
Sets the maximum number of settings entries per
SETTINGSframe. The minimum value allowed is1. - paddingStrategy?: number
Strategy used for determining the amount of padding to use for
HEADERSandDATAframes. - peerMaxConcurrentStreams?: number
Sets the maximum number of concurrent streams for the remote peer as if a
SETTINGSframe had been received. Will be overridden if the remote peer sets its own value formaxConcurrentStreams. - remoteCustomSettings?: number[]
The array of integer values determines the settings types, which are included in the
CustomSettings property of the received remoteSettings. Please see the CustomSettings property of the Http2Settings object for more information on the allowed setting types.
If
true, it turns on strict leading and trailing whitespace validation for HTTP/2 header field names and values as per RFC-9113. - unknownProtocolTimeout?: number
Specifies a timeout in milliseconds that a server should wait when an 'unknownProtocol' event is emitted. If the socket has not been destroyed by that time the server will destroy it.
interface SessionState
interface Settings
interface StatOptions
interface StreamState