
Node.js module

http2

The 'node:http2' module provides an API for HTTP/2 clients and servers, including support for multiplexing streams, HPACK header compression, and server push.

Works in Bun

Client & server are implemented (95.25% of gRPC's test suite passes). Some options, the ALTSVC extension, and server push functionality are missing.
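
A minimal sketch of a cleartext HTTP/2 server built with this module (the port number and response body are arbitrary):

  import http2 from 'node:http2';

  // Create an HTTP/2 server without TLS (h2c). Browsers require TLS,
  // so this form is mostly useful for local testing and gRPC-style clients.
  const server = http2.createServer();

  server.on('stream', (stream, headers) => {
    // Respond on the HTTP/2 stream with a :status pseudo-header and a body.
    stream.respond({ ':status': 200, 'content-type': 'text/plain' });
    stream.end(`you requested ${headers[':path']}`);
  });

  server.listen(8000);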

  • namespace constants

  • class Http2ServerRequest

    An Http2ServerRequest object is created by Server or SecureServer and passed as the first argument to the 'request' event. It may be used to access the request status, headers, and data.

    • readonly aborted: boolean

      The request.aborted property will be true if the request has been aborted.

    • readonly authority: string

      The request authority pseudo header field. Because HTTP/2 allows requests to set either :authority or host, this value is derived from req.headers[':authority'] if present. Otherwise, it is derived from req.headers['host'].

    • readonly closed: boolean

      Is true after 'close' has been emitted.

    • readonly complete: boolean

      The request.complete property will be true if the request has been completed, aborted, or destroyed.

    • destroyed: boolean

      Is true after readable.destroy() has been called.

    • readonly errored: null | Error

      Returns error if the stream has been destroyed with an error.

    • readonly headers: IncomingHttpHeaders

      The request/response headers object.

      Key-value pairs of header names and values. Header names are lower-cased.

      // Prints something like:
      //
      // { 'user-agent': 'curl/7.22.0',
      //   host: '127.0.0.1:8000',
      //   accept: '*' }
      console.log(request.headers);
      

      See HTTP/2 Headers Object.

      In HTTP/2, the request path, host name, protocol, and method are represented as special headers prefixed with the : character (e.g. ':path'). These special headers will be included in the request.headers object. Care must be taken not to inadvertently modify these special headers or errors may occur. For instance, removing all headers from the request will cause errors to occur:

      removeAllHeaders(request.headers);
      assert(request.url);   // Fails because the :path header has been removed
      
    • readonly httpVersion: string

      In case of server request, the HTTP version sent by the client. In the case of client response, the HTTP version of the connected-to server. Returns '2.0'.

      Also message.httpVersionMajor is the first integer and message.httpVersionMinor is the second.

    • readonly httpVersionMajor: number
    • readonly httpVersionMinor: number
    • readonly method: string

      The request method as a string. Read-only. Examples: 'GET', 'DELETE'.

    • readonly rawHeaders: string[]

      The raw request/response headers list exactly as they were received.

      The keys and values are in the same list. It is not a list of tuples. So, the even-numbered offsets are key values, and the odd-numbered offsets are the associated values.

      Header names are not lowercased, and duplicates are not merged.

      // Prints something like:
      //
      // [ 'user-agent',
      //   'this is invalid because there can be only one',
      //   'User-Agent',
      //   'curl/7.22.0',
      //   'Host',
      //   '127.0.0.1:8000',
      //   'ACCEPT',
      //   '*' ]
      console.log(request.rawHeaders);
      
    • readonly rawTrailers: string[]

      The raw request/response trailer keys and values exactly as they were received. Only populated at the 'end' event.

    • readable: boolean

      Is true if it is safe to call read, which means the stream has not been destroyed or emitted 'error' or 'end'.

    • readonly readableAborted: boolean

      Returns whether the stream was destroyed or errored before emitting 'end'.

    • readonly readableDidRead: boolean

      Returns whether 'data' has been emitted.

    • readonly readableEncoding: null | BufferEncoding

      Getter for the property encoding of a given Readable stream. The encoding property can be set using the setEncoding method.

    • readonly readableEnded: boolean

      Becomes true when 'end' event is emitted.

    • readonly readableFlowing: null | boolean

      This property reflects the current state of a Readable stream as described in the Three states section.

    • readonly readableHighWaterMark: number

      Returns the value of highWaterMark passed when creating this Readable.

    • readonly readableLength: number

      This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the highWaterMark.

    • readonly readableObjectMode: boolean

      Getter for the property objectMode of a given Readable stream.

    • readonly scheme: string

      The request scheme pseudo header field indicating the scheme portion of the target URL.

    • readonly socket: Socket | TLSSocket

      Returns a Proxy object that acts as a net.Socket (or tls.TLSSocket) but applies getters, setters, and methods based on HTTP/2 logic.

      destroyed, readable, and writable properties will be retrieved from and set on request.stream.

      destroy, emit, end, on and once methods will be called on request.stream.

      setTimeout method will be called on request.stream.session.

      pause, read, resume, and write will throw an error with code ERR_HTTP2_NO_SOCKET_MANIPULATION. See Http2Session and Sockets for more information.

      All other interactions will be routed directly to the socket. With TLS support, use request.socket.getPeerCertificate() to obtain the client's authentication details.
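
      As a hedged illustration of the socket proxy, the sketch below assumes a TLS-enabled server created with http2.createSecureServer and hypothetical key/certificate file paths; the getPeerCertificate() call is forwarded to the underlying TLSSocket:

      import http2 from 'node:http2';
      import fs from 'node:fs';

      // Hypothetical certificate paths; requestCert asks clients for a certificate.
      const server = http2.createSecureServer({
        key: fs.readFileSync('server-key.pem'),
        cert: fs.readFileSync('server-cert.pem'),
        requestCert: true,
        rejectUnauthorized: false,
      });

      server.on('request', (request, response) => {
        // The socket proxy routes this call to the underlying TLSSocket.
        const cert = request.socket.getPeerCertificate();
        response.end(cert && cert.subject ? `hello ${cert.subject.CN}` : 'hello anonymous');
      });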

    • readonly stream: ServerHttp2Stream

      The Http2Stream object backing the request.

    • readonly trailers: IncomingHttpHeaders

      The request/response trailers object. Only populated at the 'end' event.

    • url: string

      Request URL string. This contains only the URL that is present in the actual HTTP request. If the request is:

      GET /status?name=ryan HTTP/1.1
      Accept: text/plain
      

      Then request.url will be:

      '/status?name=ryan'
      

      To parse the url into its parts, new URL() can be used:

      $ node
      > new URL('/status?name=ryan', 'http://example.com')
      URL {
        href: 'http://example.com/status?name=ryan',
        origin: 'http://example.com',
        protocol: 'http:',
        username: '',
        password: '',
        host: 'example.com',
        hostname: 'example.com',
        port: '',
        pathname: '/status',
        search: '?name=ryan',
        searchParams: URLSearchParams { 'name' => 'ryan' },
        hash: ''
      }
      
    • static captureRejections: boolean

      Value: boolean

      Change the default captureRejections option on all new EventEmitter objects.

    • readonly static captureRejectionSymbol: typeof captureRejectionSymbol

      Value: Symbol.for('nodejs.rejection')

      See how to write a custom rejection handler.

    • static defaultMaxListeners: number

      By default, a maximum of 10 listeners can be registered for any single event. This limit can be changed for individual EventEmitter instances using the emitter.setMaxListeners(n) method. To change the default for all EventEmitter instances, the events.defaultMaxListeners property can be used. If this value is not a positive number, a RangeError is thrown.

      Take caution when setting the events.defaultMaxListeners because the change affects all EventEmitter instances, including those created before the change is made. However, calling emitter.setMaxListeners(n) still has precedence over events.defaultMaxListeners.

      This is not a hard limit. The EventEmitter instance will allow more listeners to be added but will output a trace warning to stderr indicating that a "possible EventEmitter memory leak" has been detected. For any single EventEmitter, the emitter.getMaxListeners() and emitter.setMaxListeners() methods can be used to temporarily avoid this warning:

      import { EventEmitter } from 'node:events';
      const emitter = new EventEmitter();
      emitter.setMaxListeners(emitter.getMaxListeners() + 1);
      emitter.once('event', () => {
        // do stuff
        emitter.setMaxListeners(Math.max(emitter.getMaxListeners() - 1, 0));
      });
      

      The --trace-warnings command-line flag can be used to display the stack trace for such warnings.

      The emitted warning can be inspected with process.on('warning') and will have the additional emitter, type, and count properties, referring to the event emitter instance, the event's name and the number of attached listeners, respectively. Its name property is set to 'MaxListenersExceededWarning'.

    • readonly static errorMonitor: typeof errorMonitor

      This symbol shall be used to install a listener for only monitoring 'error' events. Listeners installed using this symbol are called before the regular 'error' listeners are called.

      Installing a listener using this symbol does not change the behavior once an 'error' event is emitted. Therefore, the process will still crash if no regular 'error' listener is installed.

    • _construct(
      callback: (error?: null | Error) => void
      ): void;
    • _destroy(
      error: null | Error,
      callback: (error?: null | Error) => void
      ): void;
    • _read(
      size: number
      ): void;
    • [Symbol.asyncDispose](): Promise<void>;

      Calls readable.destroy() with an AbortError and returns a promise that fulfills when the stream is finished.

    • [Symbol.asyncIterator](): AsyncIterator<any>;
    • [captureRejectionSymbol](
      error: Error,
      event: string | symbol,
      ...args: AnyRest
      ): void;
    • addListener(
      event: 'aborted',
      listener: (hadError: boolean, code: number) => void
      ): this;

      Event emitter. The defined events on documents include:

      1. close
      2. data
      3. end
      4. error
      5. pause
      6. readable
      7. resume

      addListener(
      event: 'close',
      listener: () => void
      ): this;
      addListener(
      event: 'data',
      listener: (chunk: string | Buffer<ArrayBufferLike>) => void
      ): this;
      addListener(
      event: 'end',
      listener: () => void
      ): this;
      addListener(
      event: 'readable',
      listener: () => void
      ): this;
      addListener(
      event: 'error',
      listener: (err: Error) => void
      ): this;
      addListener(
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • asIndexedPairs(
      options?: Pick<ArrayOptions, 'signal'>
      ): Readable;

      This method returns a new stream with chunks of the underlying stream paired with a counter in the form [index, chunk]. The first index value is 0 and it increases by 1 for each chunk produced.

      @returns

      a stream of indexed pairs.
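
      For example, a minimal sketch using Readable.from and toArray from this same class:

      import { Readable } from 'node:stream';

      // Each chunk is paired with its index: [0, 'a'], [1, 'b'], [2, 'c'].
      const pairs = await Readable.from(['a', 'b', 'c']).asIndexedPairs().toArray();
      console.log(pairs); // [ [ 0, 'a' ], [ 1, 'b' ], [ 2, 'c' ] ]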

    • compose<T extends ReadableStream>(
      stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,
      options?: { signal: AbortSignal }
      ): T;
    • destroy(
      error?: Error
      ): this;

      Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the readable stream will release any internal resources and subsequent calls to push() will be ignored.

      Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.

      Implementors should not override this method, but instead implement readable._destroy().

      @param error

      Error which will be passed as payload in 'error' event

    • drop(
      limit: number,
      options?: Pick<ArrayOptions, 'signal'>
      ): Readable;

      This method returns a new stream with the first limit chunks dropped from the start.

      @param limit

      the number of chunks to drop from the readable.

      @returns

      a stream with limit chunks dropped from the start.
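
      A small sketch:

      import { Readable } from 'node:stream';

      // Drop the first two chunks and collect the remainder.
      const rest = await Readable.from([1, 2, 3, 4]).drop(2).toArray();
      console.log(rest); // [ 3, 4 ]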

    • emit(
      event: 'aborted',
      hadError: boolean,
      code: number
      ): boolean;

      Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

      Returns true if the event had listeners, false otherwise.

      import { EventEmitter } from 'node:events';
      const myEmitter = new EventEmitter();
      
      // First listener
      myEmitter.on('event', function firstListener() {
        console.log('Helloooo! first listener');
      });
      // Second listener
      myEmitter.on('event', function secondListener(arg1, arg2) {
        console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
      });
      // Third listener
      myEmitter.on('event', function thirdListener(...args) {
        const parameters = args.join(', ');
        console.log(`event with parameters ${parameters} in third listener`);
      });
      
      console.log(myEmitter.listeners('event'));
      
      myEmitter.emit('event', 1, 2, 3, 4, 5);
      
      // Prints:
      // [
      //   [Function: firstListener],
      //   [Function: secondListener],
      //   [Function: thirdListener]
      // ]
      // Helloooo! first listener
      // event with parameters 1, 2 in second listener
      // event with parameters 1, 2, 3, 4, 5 in third listener
      
      emit(
      event: 'close'
      ): boolean;
      emit(
      event: 'data',
      chunk: string | Buffer<ArrayBufferLike>
      ): boolean;
      emit(
      event: 'end'
      ): boolean;
      emit(
      event: 'readable'
      ): boolean;
      emit(
      event: 'error',
      err: Error
      ): boolean;
      emit(
      event: string | symbol,
      ...args: any[]
      ): boolean;
    • eventNames(): (string | symbol)[];

      Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or Symbols.

      import { EventEmitter } from 'node:events';
      
      const myEE = new EventEmitter();
      myEE.on('foo', () => {});
      myEE.on('bar', () => {});
      
      const sym = Symbol('symbol');
      myEE.on(sym, () => {});
      
      console.log(myEE.eventNames());
      // Prints: [ 'foo', 'bar', Symbol(symbol) ]
      
    • every(
      fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
      options?: ArrayOptions
      ): Promise<boolean>;

      This method is similar to Array.prototype.every and calls fn on each chunk in the stream to check if all awaited return values are truthy for fn. Once an fn call on a chunk's awaited return value is falsy, the stream is destroyed and the promise is fulfilled with false. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true.

      @param fn

      a function to call on each chunk of the stream. Async or not.

      @returns

      a promise evaluating to true if fn returned a truthy value for every one of the chunks.
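
      A small sketch:

      import { Readable } from 'node:stream';

      // Resolves to true only if the predicate is truthy for every chunk.
      const allPositive = await Readable.from([1, 2, 3]).every((n) => n > 0);
      console.log(allPositive); // true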

    • filter(
      fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
      options?: ArrayOptions
      ): Readable;

      This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be awaited.

      @param fn

      a function to filter chunks from the stream. Async or not.

      @returns

      a stream filtered with the predicate fn.
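
      A small sketch:

      import { Readable } from 'node:stream';

      // Keep only the even chunks; the predicate may also be async.
      const evens = await Readable.from([1, 2, 3, 4]).filter((n) => n % 2 === 0).toArray();
      console.log(evens); // [ 2, 4 ]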

    • find<T>(
      fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => data is T,
      options?: ArrayOptions
      ): Promise<undefined | T>;

      This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.

      @param fn

      a function to call on each chunk of the stream. Async or not.

      @returns

      a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.

      find(
      fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
      options?: ArrayOptions
      ): Promise<any>;

      This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.

      @param fn

      a function to call on each chunk of the stream. Async or not.

      @returns

      a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.
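
      A small sketch:

      import { Readable } from 'node:stream';

      // Resolves with the first chunk greater than 2, or undefined if none matches.
      const found = await Readable.from([1, 2, 3, 4]).find((n) => n > 2);
      console.log(found); // 3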

    • flatMap(
      fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => any,
      options?: ArrayOptions
      ): Readable;

      This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.

      It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.

      @param fn

      a function to map over every chunk in the stream. May be async. May be a stream or generator.

      @returns

      a stream flat-mapped with the function fn.
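
      A small sketch:

      import { Readable } from 'node:stream';

      // Each chunk maps to an array; the arrays are flattened into the result stream.
      const words = await Readable.from(['a b', 'c d']).flatMap((line) => line.split(' ')).toArray();
      console.log(words); // [ 'a', 'b', 'c', 'd' ]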

    • forEach(
      fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => void | Promise<void>,
      options?: ArrayOptions
      ): Promise<void>;

      This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be awaited.

      This method is different from for await...of loops in that it can optionally process chunks concurrently. In addition, a forEach iteration can only be stopped by having passed a signal option and aborting the related AbortController while for await...of can be stopped with break or return. In either case the stream will be destroyed.

      This method is different from listening to the 'data' event in that it uses the readable event in the underlying machinery and can limit the number of concurrent fn calls.

      @param fn

      a function to call on each chunk of the stream. Async or not.

      @returns

      a promise for when the stream has finished.
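
      A small sketch (the concurrency value is arbitrary):

      import { Readable } from 'node:stream';

      // Calls the function for every chunk; `concurrency` bounds in-flight async calls.
      await Readable.from([1, 2, 3]).forEach(async (n) => {
        console.log(n);
      }, { concurrency: 2 });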

    • getMaxListeners(): number;

      Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to EventEmitter.defaultMaxListeners.

    • isPaused(): boolean;

      The readable.isPaused() method returns the current operating state of the Readable. This is used primarily by the mechanism that underlies the readable.pipe() method. In most typical cases, there will be no reason to use this method directly.

      const readable = new stream.Readable();
      
      readable.isPaused(); // === false
      readable.pause();
      readable.isPaused(); // === true
      readable.resume();
      readable.isPaused(); // === false
      
    • iterator(
      options?: { destroyOnReturn: boolean }
      ): AsyncIterator<any>;

      The iterator created by this method gives users the option to cancel the destruction of the stream if the for await...of loop is exited by return, break, or throw, or if the iterator should destroy the stream if the stream emitted an error during iteration.

    • listenerCount(
      eventName: string | symbol,
      listener?: Function
      ): number;

      Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.

      @param eventName

      The name of the event being listened for

      @param listener

      The event handler function

    • listeners(
      eventName: string | symbol
      ): Function[];

      Returns a copy of the array of listeners for the event named eventName.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      console.log(util.inspect(server.listeners('connection')));
      // Prints: [ [Function] ]
      
    • map(
      fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => any,
      options?: ArrayOptions
      ): Readable;

      This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be awaited before being passed to the result stream.

      @param fn

      a function to map over every chunk in the stream. Async or not.

      @returns

      a stream mapped with the function fn.
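
      A small sketch:

      import { Readable } from 'node:stream';

      // Double each chunk; the mapper may also be async.
      const doubled = await Readable.from([1, 2, 3]).map((n) => n * 2).toArray();
      console.log(doubled); // [ 2, 4, 6 ]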

    • off<K>(
      eventName: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Alias for emitter.removeListener().

    • on(
      event: 'aborted',
      listener: (hadError: boolean, code: number) => void
      ): this;

      Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.on('foo', () => console.log('a'));
      myEE.prependListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      on(
      event: 'close',
      listener: () => void
      ): this;
      on(
      event: 'data',
      listener: (chunk: string | Buffer<ArrayBufferLike>) => void
      ): this;
      on(
      event: 'end',
      listener: () => void
      ): this;
      on(
      event: 'readable',
      listener: () => void
      ): this;
      on(
      event: 'error',
      listener: (err: Error) => void
      ): this;
      on(
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • once(
      event: 'aborted',
      listener: (hadError: boolean, code: number) => void
      ): this;

      Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

      server.once('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.once('foo', () => console.log('a'));
      myEE.prependOnceListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      once(
      event: 'close',
      listener: () => void
      ): this;
      once(
      event: 'data',
      listener: (chunk: string | Buffer<ArrayBufferLike>) => void
      ): this;
      once(
      event: 'end',
      listener: () => void
      ): this;
      once(
      event: 'readable',
      listener: () => void
      ): this;
      once(
      event: 'error',
      listener: (err: Error) => void
      ): this;
      once(
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • pause(): this;

      The readable.pause() method will cause a stream in flowing mode to stop emitting 'data' events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.

      const readable = getReadableStreamSomehow();
      readable.on('data', (chunk) => {
        console.log(`Received ${chunk.length} bytes of data.`);
        readable.pause();
        console.log('There will be no additional data for 1 second.');
        setTimeout(() => {
          console.log('Now data will start flowing again.');
          readable.resume();
        }, 1000);
      });
      

      The readable.pause() method has no effect if there is a 'readable' event listener.

    • pipe<T extends WritableStream>(
      destination: T,
      options?: { end: boolean }
      ): T;
    • prependListener(
      event: 'aborted',
      listener: (hadError: boolean, code: number) => void
      ): this;

      Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.prependListener('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      prependListener(
      event: 'close',
      listener: () => void
      ): this;
      prependListener(
      event: 'data',
      listener: (chunk: string | Buffer<ArrayBufferLike>) => void
      ): this;
      prependListener(
      event: 'end',
      listener: () => void
      ): this;
      prependListener(
      event: 'readable',
      listener: () => void
      ): this;
      prependListener(
      event: 'error',
      listener: (err: Error) => void
      ): this;
      prependListener(
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • prependOnceListener(
      event: 'aborted',
      listener: (hadError: boolean, code: number) => void
      ): this;

      Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

      server.prependOnceListener('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      prependOnceListener(
      event: 'close',
      listener: () => void
      ): this;
      prependOnceListener(
      event: 'data',
      listener: (chunk: string | Buffer<ArrayBufferLike>) => void
      ): this;
      prependOnceListener(
      event: 'end',
      listener: () => void
      ): this;
      prependOnceListener(
      event: 'readable',
      listener: () => void
      ): this;
      prependOnceListener(
      event: 'error',
      listener: (err: Error) => void
      ): this;
      prependOnceListener(
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • push(
      chunk: any,
      encoding?: BufferEncoding
      ): boolean;
    • rawListeners(
      eventName: string | symbol
      ): Function[];

      Returns a copy of the array of listeners for the event named eventName, including any wrappers (such as those created by .once()).

      import { EventEmitter } from 'node:events';
      const emitter = new EventEmitter();
      emitter.once('log', () => console.log('log once'));
      
      // Returns a new Array with a function `onceWrapper` which has a property
      // `listener` which contains the original listener bound above
      const listeners = emitter.rawListeners('log');
      const logFnWrapper = listeners[0];
      
      // Logs "log once" to the console and does not unbind the `once` event
      logFnWrapper.listener();
      
      // Logs "log once" to the console and removes the listener
      logFnWrapper();
      
      emitter.on('log', () => console.log('log persistently'));
      // Will return a new Array with a single function bound by `.on()` above
      const newListeners = emitter.rawListeners('log');
      
      // Logs "log persistently" twice
      newListeners[0]();
      emitter.emit('log');
      
    • read(
      size?: number
      ): null | string | Buffer<ArrayBufferLike>;

      The readable.read() method reads data out of the internal buffer and returns it. If no data is available to be read, null is returned. By default, the data is returned as a Buffer object unless an encoding has been specified using the readable.setEncoding() method or the stream is operating in object mode.

      The optional size argument specifies a specific number of bytes to read. If size bytes are not available to be read, null will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.

      If the size argument is not specified, all of the data contained in the internal buffer will be returned.

      The size argument must be less than or equal to 1 GiB.

      The readable.read() method should only be called on Readable streams operating in paused mode. In flowing mode, readable.read() is called automatically until the internal buffer is fully drained.

      const readable = getReadableStreamSomehow();
      
      // 'readable' may be triggered multiple times as data is buffered in
      readable.on('readable', () => {
        let chunk;
        console.log('Stream is readable (new data received in buffer)');
        // Use a loop to make sure we read all currently available data
        while (null !== (chunk = readable.read())) {
          console.log(`Read ${chunk.length} bytes of data...`);
        }
      });
      
      // 'end' will be triggered once when there is no more data available
      readable.on('end', () => {
        console.log('Reached end of stream.');
      });
      

      Each call to readable.read() returns a chunk of data, or null. The chunks are not concatenated. A while loop is necessary to consume all data currently in the buffer. When reading a large file .read() may return null, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new 'readable' event will be emitted when there is more data in the buffer. Finally the 'end' event will be emitted when there is no more data to come.

      Therefore to read a file's whole contents from a readable, it is necessary to collect chunks across multiple 'readable' events:

      const chunks = [];
      
      readable.on('readable', () => {
        let chunk;
        while (null !== (chunk = readable.read())) {
          chunks.push(chunk);
        }
      });
      
      readable.on('end', () => {
        const content = chunks.join('');
      });
      

      A Readable stream in object mode will always return a single item from a call to readable.read(size), regardless of the value of the size argument.

      If the readable.read() method returns a chunk of data, a 'data' event will also be emitted.

      Calling read after the 'end' event has been emitted will return null. No runtime error will be raised.

      @param size

      Optional argument to specify how much data to read.

    • reduce<T = any>(
      fn: (previous: any, data: any, options?: Pick<ArrayOptions, 'signal'>) => T,
      initial?: undefined,
      options?: Pick<ArrayOptions, 'signal'>
      ): Promise<T>;

      This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.

      If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.

      The reducer function iterates the stream element-by-element which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to readable.map method.

      @param fn

      a reducer function to call over every chunk in the stream. Async or not.

      @param initial

      the initial value to use in the reduction.

      @returns

      a promise for the final value of the reduction.

      reduce<T = any>(
      fn: (previous: T, data: any, options?: Pick<ArrayOptions, 'signal'>) => T,
      initial: T,
      options?: Pick<ArrayOptions, 'signal'>
      ): Promise<T>;

      This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.

      If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.

      The reducer function iterates the stream element-by-element which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to readable.map method.

      @param fn

      a reducer function to call over every chunk in the stream. Async or not.

      @param initial

      the initial value to use in the reduction.

      @returns

      a promise for the final value of the reduction.
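
      A small sketch:

      import { Readable } from 'node:stream';

      // Sum the chunks, starting the accumulator at 0.
      const total = await Readable.from([1, 2, 3, 4]).reduce((sum, n) => sum + n, 0);
      console.log(total); // 10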

    • removeAllListeners(
      eventName?: string | symbol
      ): this;

      Removes all listeners, or those of the specified eventName.

      It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).

      Returns a reference to the EventEmitter, so that calls can be chained.

    • removeListener(
      event: 'close',
      listener: () => void
      ): this;

      Removes the specified listener from the listener array for the event named eventName.

      const callback = (stream) => {
        console.log('someone connected!');
      };
      server.on('connection', callback);
      // ...
      server.removeListener('connection', callback);
      

      removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance.

      Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from an emit() in progress. Subsequent events behave as expected.

      import { EventEmitter } from 'node:events';
      class MyEmitter extends EventEmitter {}
      const myEmitter = new MyEmitter();
      
      const callbackA = () => {
        console.log('A');
        myEmitter.removeListener('event', callbackB);
      };
      
      const callbackB = () => {
        console.log('B');
      };
      
      myEmitter.on('event', callbackA);
      
      myEmitter.on('event', callbackB);
      
      // callbackA removes listener callbackB but it will still be called.
      // Internal listener array at time of emit [callbackA, callbackB]
      myEmitter.emit('event');
      // Prints:
      //   A
      //   B
      
      // callbackB is now removed.
      // Internal listener array [callbackA]
      myEmitter.emit('event');
      // Prints:
      //   A
      

      Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.

      When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed:

      import { EventEmitter } from 'node:events';
      const ee = new EventEmitter();
      
      function pong() {
        console.log('pong');
      }
      
      ee.on('ping', pong);
      ee.once('ping', pong);
      ee.removeListener('ping', pong);
      
      ee.emit('ping');
      ee.emit('ping');
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      removeListener(
      event: 'data',
      listener: (chunk: any) => void
      ): this;
      removeListener(
      event: 'end',
      listener: () => void
      ): this;
      removeListener(
      event: 'error',
      listener: (err: Error) => void
      ): this;
      removeListener(
      event: 'pause',
      listener: () => void
      ): this;
      removeListener(
      event: 'readable',
      listener: () => void
      ): this;
      removeListener(
      event: 'resume',
      listener: () => void
      ): this;
      removeListener(
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • resume(): this;

      The readable.resume() method causes an explicitly paused Readable stream to resume emitting 'data' events, switching the stream into flowing mode.

      The readable.resume() method can be used to fully consume the data from a stream without actually processing any of that data:

      getReadableStreamSomehow()
        .resume()
        .on('end', () => {
          console.log('Reached the end, but did not read anything.');
        });
      

      The readable.resume() method has no effect if there is a 'readable' event listener.

    • setEncoding(
      encoding: BufferEncoding
      ): this;

      The readable.setEncoding() method sets the character encoding for data read from the Readable stream.

      By default, no encoding is assigned and stream data will be returned as Buffer objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than as Buffer objects. For instance, calling readable.setEncoding('utf8') will cause the output data to be interpreted as UTF-8 data, and passed as strings. Calling readable.setEncoding('hex') will cause the data to be encoded in hexadecimal string format.

      The Readable stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream as Buffer objects.

      const readable = getReadableStreamSomehow();
      readable.setEncoding('utf8');
      readable.on('data', (chunk) => {
        assert.equal(typeof chunk, 'string');
        console.log('Got %d characters of string data:', chunk.length);
      });
      
      @param encoding

      The encoding to use.

    • setMaxListeners(
      n: number
      ): this;

      By default EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps finding memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.

      Returns a reference to the EventEmitter, so that calls can be chained.

    • setTimeout(
      msecs: number,
      callback?: () => void
      ): void;

      Sets the Http2Stream's timeout value to msecs. If a callback is provided, then it is added as a listener on the 'timeout' event on the response object.

      If no 'timeout' listener is added to the request, the response, or the server, then Http2Streams are destroyed when they time out. If a handler is assigned to the request, the response, or the server's 'timeout' events, timed out sockets must be handled explicitly.
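
      A hedged sketch, assuming a server created with http2.createServer and an arbitrary 30-second timeout:

      server.on('request', (request, response) => {
        request.setTimeout(30000, () => {
          // Handle the timed-out stream explicitly, e.g. by ending the response.
          response.end();
        });
      });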

    • some(
      fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
      options?: ArrayOptions
      ): Promise<boolean>;

      This method is similar to Array.prototype.some and calls fn on each chunk in the stream until the awaited return value is true (or any truthy value). Once an fn call on a chunk's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false.

      @param fn

      a function to call on each chunk of the stream. Async or not.

      @returns

      a promise evaluating to true if fn returned a truthy value for at least one of the chunks.
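
      A small sketch:

      import { Readable } from 'node:stream';

      // Resolves to true as soon as one chunk satisfies the predicate.
      const hasNegative = await Readable.from([1, -2, 3]).some((n) => n < 0);
      console.log(hasNegative); // true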

    • take(
      limit: number,
      options?: Pick<ArrayOptions, 'signal'>
      ): Readable;

      This method returns a new stream with the first limit chunks.

      @param limit

      the number of chunks to take from the readable.

      @returns

      a stream with limit chunks taken.
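
      A small sketch:

      import { Readable } from 'node:stream';

      // Keep only the first two chunks.
      const firstTwo = await Readable.from([1, 2, 3, 4]).take(2).toArray();
      console.log(firstTwo); // [ 1, 2 ]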

    • toArray(
      options?: Pick<ArrayOptions, 'signal'>
      ): Promise<any[]>;

      This method allows easily obtaining the contents of a stream.

      As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.

      @returns

      a promise containing an array with the contents of the stream.
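
      A small sketch:

      import { Readable } from 'node:stream';

      // Buffers the entire stream into memory and resolves with its chunks.
      const chunks = await Readable.from(['a', 'b', 'c']).toArray();
      console.log(chunks); // [ 'a', 'b', 'c' ]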

    • unpipe(
      destination?: WritableStream
      ): this;

      The readable.unpipe() method detaches a Writable stream previously attached using the pipe method.

      If the destination is not specified, then all pipes are detached.

      If the destination is specified, but no pipe is set up for it, then the method does nothing.

      import fs from 'node:fs';
      const readable = getReadableStreamSomehow();
      const writable = fs.createWriteStream('file.txt');
      // All the data from readable goes into 'file.txt',
      // but only for the first second.
      readable.pipe(writable);
      setTimeout(() => {
        console.log('Stop writing to file.txt.');
        readable.unpipe(writable);
        console.log('Manually close the file stream.');
        writable.end();
      }, 1000);
      
      @param destination

      Optional specific stream to unpipe

    • unshift(
      chunk: any,
      encoding?: BufferEncoding
      ): void;

      Passing chunk as null signals the end of the stream (EOF) and behaves the same as readable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.

      The readable.unshift() method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.

      The stream.unshift(chunk) method cannot be called after the 'end' event has been emitted or a runtime error will be thrown.

      Developers using stream.unshift() often should consider switching to use of a Transform stream instead. See the API for stream implementers section for more information.

      // Pull off a header delimited by \n\n.
      // Use unshift() if we get too much.
      // Call the callback with (error, header, stream).
      import { StringDecoder } from 'node:string_decoder';
      function parseHeader(stream, callback) {
        stream.on('error', callback);
        stream.on('readable', onReadable);
        const decoder = new StringDecoder('utf8');
        let header = '';
        function onReadable() {
          let chunk;
          while (null !== (chunk = stream.read())) {
            const str = decoder.write(chunk);
            if (str.includes('\n\n')) {
              // Found the header boundary.
              const split = str.split(/\n\n/);
              header += split.shift();
              const remaining = split.join('\n\n');
              const buf = Buffer.from(remaining, 'utf8');
              stream.removeListener('error', callback);
              // Remove the 'readable' listener before unshifting.
              stream.removeListener('readable', onReadable);
              if (buf.length)
                stream.unshift(buf);
              // Now the body of the message can be read from the stream.
              callback(null, header, stream);
              return;
            }
            // Still reading the header.
            header += str;
          }
        }
      }
      

      Unlike push, stream.unshift(chunk) will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results if readable.unshift() is called during a read (i.e. from within a _read implementation on a custom stream). Following the call to readable.unshift() with an immediate push will reset the reading state appropriately, however it is best to simply avoid calling readable.unshift() while in the process of performing a read.

      @param chunk

      Chunk of data to unshift onto the read queue. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray}, {DataView} or null. For object mode streams, chunk may be any JavaScript value.

      @param encoding

      Encoding of string chunks. Must be a valid Buffer encoding, such as 'utf8' or 'ascii'.

    • wrap(
      stream: ReadableStream
      ): this;

      Prior to Node.js 0.10, streams did not implement the entire node:stream module API as it is currently defined. (See Compatibility for more information.)

      When using an older Node.js library that emits 'data' events and has a pause method that is advisory only, the readable.wrap() method can be used to create a Readable stream that uses the old stream as its data source.

      It will rarely be necessary to use readable.wrap() but the method has been provided as a convenience for interacting with older Node.js applications and libraries.

      import { OldReader } from './old-api-module.js';
      import { Readable } from 'node:stream';
      const oreader = new OldReader();
      const myReader = new Readable().wrap(oreader);
      
      myReader.on('readable', () => {
        myReader.read(); // etc.
      });
      
      @param stream

      An "old style" readable stream

    • static addAbortListener(
      signal: AbortSignal,
      resource: (event: Event) => void
      ): Disposable;

      Listens once to the abort event on the provided signal.

      Listening to the abort event on abort signals is unsafe and may lead to resource leaks since another third party with the signal can call e.stopImmediatePropagation(). Unfortunately Node.js cannot change this since it would violate the web standard. Additionally, the original API makes it easy to forget to remove listeners.

      This API allows safely using AbortSignals in Node.js APIs by solving these two issues by listening to the event such that stopImmediatePropagation does not prevent the listener from running.

      Returns a disposable so that it may be unsubscribed from more easily.

      import { addAbortListener } from 'node:events';
      
      function example(signal) {
        let disposable;
        try {
          signal.addEventListener('abort', (e) => e.stopImmediatePropagation());
          disposable = addAbortListener(signal, (e) => {
            // Do something when signal is aborted.
          });
        } finally {
          disposable?.[Symbol.dispose]();
        }
      }
      
      @returns

      Disposable that removes the abort listener.

    • static from(
      iterable: Iterable<any, any, any> | AsyncIterable<any, any, any>,

      A utility method for creating Readable Streams out of iterators.

      @param iterable

      Object implementing the Symbol.asyncIterator or Symbol.iterator iterable protocol. Emits an 'error' event if a null value is passed.

      @param options

      Options provided to new stream.Readable([options]). By default, Readable.from() will set options.objectMode to true, unless this is explicitly opted out by setting options.objectMode to false.
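
      A small sketch:

      import { Readable } from 'node:stream';

      // Build a Readable from an async generator; objectMode defaults to true here.
      async function* generate() {
        yield 'hello';
        yield 'world';
      }

      const readable = Readable.from(generate());
      readable.on('data', (chunk) => console.log(chunk));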

    • static fromWeb(
      readableStream: ReadableStream,
      options?: Pick<ReadableOptions<Readable>, 'signal' | 'encoding' | 'highWaterMark' | 'objectMode'>
      ): Readable;

      A utility method for creating a Readable from a web ReadableStream.
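
      A small sketch, assuming a web ReadableStream obtained from fetch (the URL is a placeholder):

      import { Readable } from 'node:stream';

      // Wrap the web ReadableStream from a fetch response in a Node.js Readable.
      const response = await fetch('https://example.com');
      const nodeReadable = Readable.fromWeb(response.body);
      nodeReadable.on('data', (chunk) => console.log(chunk.length));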

    • static getEventListeners(
      emitter: EventEmitter<DefaultEventMap> | EventTarget,
      name: string | symbol
      ): Function[];

      Returns a copy of the array of listeners for the event named eventName.

      For EventEmitters this behaves exactly the same as calling .listeners on the emitter.

      For EventTargets this is the only way to get the event listeners for the event target. This is useful for debugging and diagnostic purposes.

      import { getEventListeners, EventEmitter } from 'node:events';
      
      {
        const ee = new EventEmitter();
        const listener = () => console.log('Events are fun');
        ee.on('foo', listener);
        console.log(getEventListeners(ee, 'foo')); // [ [Function: listener] ]
      }
      {
        const et = new EventTarget();
        const listener = () => console.log('Events are fun');
        et.addEventListener('foo', listener);
        console.log(getEventListeners(et, 'foo')); // [ [Function: listener] ]
      }
      
    • static getMaxListeners(
      emitter: EventEmitter<DefaultEventMap> | EventTarget
      ): number;

      Returns the currently set max amount of listeners.

      For EventEmitters this behaves exactly the same as calling .getMaxListeners on the emitter.

      For EventTargets this is the only way to get the max event listeners for the event target. If the number of event handlers on a single EventTarget exceeds the max set, the EventTarget will print a warning.

      import { getMaxListeners, setMaxListeners, EventEmitter } from 'node:events';
      
      {
        const ee = new EventEmitter();
        console.log(getMaxListeners(ee)); // 10
        setMaxListeners(11, ee);
        console.log(getMaxListeners(ee)); // 11
      }
      {
        const et = new EventTarget();
        console.log(getMaxListeners(et)); // 10
        setMaxListeners(11, et);
        console.log(getMaxListeners(et)); // 11
      }
      
    • static isDisturbed(
      stream: Readable | ReadableStream
      ): boolean;

      Returns whether the stream has been read from or cancelled.

    • static on(
      emitter: EventEmitter,
      eventName: string | symbol,
      options?: StaticEventEmitterIteratorOptions
      ): AsyncIterator<any[]>;
      import { on, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ee = new EventEmitter();
      
      // Emit later on
      process.nextTick(() => {
        ee.emit('foo', 'bar');
        ee.emit('foo', 42);
      });
      
      for await (const event of on(ee, 'foo')) {
        // The execution of this inner block is synchronous and it
        // processes one event at a time (even with await). Do not use
        // if concurrent execution is required.
        console.log(event); // prints ['bar'] [42]
      }
      // Unreachable here
      

      Returns an AsyncIterator that iterates eventName events. It will throw if the EventEmitter emits 'error'. It removes all listeners when exiting the loop. The value returned by each iteration is an array composed of the emitted event arguments.

      An AbortSignal can be used to cancel waiting on events:

      import { on, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ac = new AbortController();
      
      (async () => {
        const ee = new EventEmitter();
      
        // Emit later on
        process.nextTick(() => {
          ee.emit('foo', 'bar');
          ee.emit('foo', 42);
        });
      
        for await (const event of on(ee, 'foo', { signal: ac.signal })) {
          // The execution of this inner block is synchronous and it
          // processes one event at a time (even with await). Do not use
          // if concurrent execution is required.
          console.log(event); // prints ['bar'] [42]
        }
        // Unreachable here
      })();
      
      process.nextTick(() => ac.abort());
      

      Use the close option to specify an array of event names that will end the iteration:

      import { on, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ee = new EventEmitter();
      
      // Emit later on
      process.nextTick(() => {
        ee.emit('foo', 'bar');
        ee.emit('foo', 42);
        ee.emit('close');
      });
      
      for await (const event of on(ee, 'foo', { close: ['close'] })) {
        console.log(event); // prints ['bar'] [42]
      }
      // the loop will exit after 'close' is emitted
      console.log('done'); // prints 'done'
      
      @returns

      An AsyncIterator that iterates eventName events emitted by the emitter

      static on(
      emitter: EventTarget,
      eventName: string,
      options?: StaticEventEmitterIteratorOptions
      ): AsyncIterator<any[]>;
      import { on, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ee = new EventEmitter();
      
      // Emit later on
      process.nextTick(() => {
        ee.emit('foo', 'bar');
        ee.emit('foo', 42);
      });
      
      for await (const event of on(ee, 'foo')) {
        // The execution of this inner block is synchronous and it
        // processes one event at a time (even with await). Do not use
        // if concurrent execution is required.
        console.log(event); // prints ['bar'] [42]
      }
      // Unreachable here
      

      Returns an AsyncIterator that iterates eventName events. It will throw if the EventEmitter emits 'error'. It removes all listeners when exiting the loop. The value returned by each iteration is an array composed of the emitted event arguments.

      An AbortSignal can be used to cancel waiting on events:

      import { on, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ac = new AbortController();
      
      (async () => {
        const ee = new EventEmitter();
      
        // Emit later on
        process.nextTick(() => {
          ee.emit('foo', 'bar');
          ee.emit('foo', 42);
        });
      
        for await (const event of on(ee, 'foo', { signal: ac.signal })) {
          // The execution of this inner block is synchronous and it
          // processes one event at a time (even with await). Do not use
          // if concurrent execution is required.
          console.log(event); // prints ['bar'] [42]
        }
        // Unreachable here
      })();
      
      process.nextTick(() => ac.abort());
      

      Use the close option to specify an array of event names that will end the iteration:

      import { on, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ee = new EventEmitter();
      
      // Emit later on
      process.nextTick(() => {
        ee.emit('foo', 'bar');
        ee.emit('foo', 42);
        ee.emit('close');
      });
      
      for await (const event of on(ee, 'foo', { close: ['close'] })) {
        console.log(event); // prints ['bar'] [42]
      }
      // the loop will exit after 'close' is emitted
      console.log('done'); // prints 'done'
      
      @returns

      An AsyncIterator that iterates eventName events emitted by the emitter

    • static once(
      emitter: EventEmitter,
      eventName: string | symbol,
      options?: StaticEventEmitterOptions
      ): Promise<any[]>;

      Creates a Promise that is fulfilled when the EventEmitter emits the given event or that is rejected if the EventEmitter emits 'error' while waiting. The Promise will resolve with an array of all the arguments emitted to the given event.

      This method is intentionally generic and works with the web platform EventTarget interface, which has no special 'error' event semantics and does not listen to the 'error' event.

      import { once, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ee = new EventEmitter();
      
      process.nextTick(() => {
        ee.emit('myevent', 42);
      });
      
      const [value] = await once(ee, 'myevent');
      console.log(value);
      
      const err = new Error('kaboom');
      process.nextTick(() => {
        ee.emit('error', err);
      });
      
      try {
        await once(ee, 'myevent');
      } catch (err) {
        console.error('error happened', err);
      }
      

      The special handling of the 'error' event is only used when events.once() is used to wait for another event. If events.once() is used to wait for the 'error' event itself, then it is treated as any other kind of event without special handling:

      import { EventEmitter, once } from 'node:events';
      
      const ee = new EventEmitter();
      
      once(ee, 'error')
        .then(([err]) => console.log('ok', err.message))
        .catch((err) => console.error('error', err.message));
      
      ee.emit('error', new Error('boom'));
      
      // Prints: ok boom
      

      An AbortSignal can be used to cancel waiting for the event:

      import { EventEmitter, once } from 'node:events';
      
      const ee = new EventEmitter();
      const ac = new AbortController();
      
      async function foo(emitter, event, signal) {
        try {
          await once(emitter, event, { signal });
          console.log('event emitted!');
        } catch (error) {
          if (error.name === 'AbortError') {
            console.error('Waiting for the event was canceled!');
          } else {
            console.error('There was an error', error.message);
          }
        }
      }
      
      foo(ee, 'foo', ac.signal);
      ac.abort(); // Abort waiting for the event
      ee.emit('foo'); // Prints: Waiting for the event was canceled!
      
      static once(
      emitter: EventTarget,
      eventName: string,
      options?: StaticEventEmitterOptions
      ): Promise<any[]>;

      Creates a Promise that is fulfilled when the EventEmitter emits the given event or that is rejected if the EventEmitter emits 'error' while waiting. The Promise will resolve with an array of all the arguments emitted to the given event.

      This method is intentionally generic and works with the web platform EventTarget interface, which has no special 'error' event semantics and does not listen to the 'error' event.

      import { once, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ee = new EventEmitter();
      
      process.nextTick(() => {
        ee.emit('myevent', 42);
      });
      
      const [value] = await once(ee, 'myevent');
      console.log(value);
      
      const err = new Error('kaboom');
      process.nextTick(() => {
        ee.emit('error', err);
      });
      
      try {
        await once(ee, 'myevent');
      } catch (err) {
        console.error('error happened', err);
      }
      

      The special handling of the 'error' event is only used when events.once() is used to wait for another event. If events.once() is used to wait for the 'error' event itself, then it is treated as any other kind of event without special handling:

      import { EventEmitter, once } from 'node:events';
      
      const ee = new EventEmitter();
      
      once(ee, 'error')
        .then(([err]) => console.log('ok', err.message))
        .catch((err) => console.error('error', err.message));
      
      ee.emit('error', new Error('boom'));
      
      // Prints: ok boom
      

      An AbortSignal can be used to cancel waiting for the event:

      import { EventEmitter, once } from 'node:events';
      
      const ee = new EventEmitter();
      const ac = new AbortController();
      
      async function foo(emitter, event, signal) {
        try {
          await once(emitter, event, { signal });
          console.log('event emitted!');
        } catch (error) {
          if (error.name === 'AbortError') {
            console.error('Waiting for the event was canceled!');
          } else {
            console.error('There was an error', error.message);
          }
        }
      }
      
      foo(ee, 'foo', ac.signal);
      ac.abort(); // Abort waiting for the event
      ee.emit('foo'); // Prints: Waiting for the event was canceled!
      
    • n?: number,
      ...eventTargets: (EventEmitter<DefaultEventMap> | EventTarget)[]
      ): void;
      import { setMaxListeners, EventEmitter } from 'node:events';
      
      const target = new EventTarget();
      const emitter = new EventEmitter();
      
      setMaxListeners(5, target, emitter);
      
      @param n

      A non-negative number. The maximum number of listeners per EventTarget event.

      @param eventTargets

      Zero or more {EventTarget} or {EventEmitter} instances. If none are specified, n is set as the default max for all newly created {EventTarget} and {EventEmitter} objects.

    • static toWeb(
      streamReadable: Readable,
      options?: { strategy: QueuingStrategy<any> }

      A utility method for creating a web ReadableStream from a Readable.
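
      As an illustration (not part of the reference text above), a minimal sketch that wraps a Readable built with Readable.from as a web ReadableStream and consumes it with a reader; the chunk values are placeholders:

      import { Readable } from 'node:stream';
      
      // Wrap an existing Node.js Readable as a web ReadableStream.
      const nodeReadable = Readable.from(['hello', ' ', 'world']);
      const webReadable = Readable.toWeb(nodeReadable);
      
      const reader = webReadable.getReader();
      for (let result = await reader.read(); !result.done; result = await reader.read()) {
        console.log(result.value); // chunks produced by the original Node.js stream
      }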

  • class Http2ServerResponse<Request extends Http2ServerRequest = Http2ServerRequest>

    This object is created internally by an HTTP server, not by the user. It is passed as the second parameter to the 'request' event.

    • readonly closed: boolean

      Is true after 'close' has been emitted.

    • destroyed: boolean

      Is true after writable.destroy() has been called.

    • readonly errored: null | Error

      Returns error if the stream has been destroyed with an error.

    • readonly headersSent: boolean

      True if headers were sent, false otherwise (read-only).

    • readonly req: Request

      A reference to the original HTTP2 request object.

    • sendDate: boolean

      When true, the Date header will be automatically generated and sent in the response if it is not already present in the headers. Defaults to true.

      This should only be disabled for testing; HTTP requires the Date header in responses.

    • readonly socket: Socket | TLSSocket

      Returns a Proxy object that acts as a net.Socket (or tls.TLSSocket) but applies getters, setters, and methods based on HTTP/2 logic.

      destroyed, readable, and writable properties will be retrieved from and set on response.stream.

      destroy, emit, end, on and once methods will be called on response.stream.

      setTimeout method will be called on response.stream.session.

      pause, read, resume, and write will throw an error with code ERR_HTTP2_NO_SOCKET_MANIPULATION. See Http2Session and Sockets for more information.

      All other interactions will be routed directly to the socket.

      import http2 from 'node:http2';
      const server = http2.createServer((req, res) => {
        const ip = req.socket.remoteAddress;
        const port = req.socket.remotePort;
        res.end(`Your IP address is ${ip} and your source port is ${port}.`);
      }).listen(3000);
      
    • statusCode: number

      When using implicit headers (not calling response.writeHead() explicitly), this property controls the status code that will be sent to the client when the headers get flushed.

      response.statusCode = 404;
      

      After response header was sent to the client, this property indicates the status code which was sent out.

    • statusMessage: ''

      Status message is not supported by HTTP/2 (RFC 7540 8.1.2.4). It returns an empty string.

    • readonly stream: ServerHttp2Stream

      The Http2Stream object backing the response.

    • readonly writable: boolean

      Is true if it is safe to call writable.write(), which means the stream has not been destroyed, errored, or ended.

    • readonly writableAborted: boolean

      Returns whether the stream was destroyed or errored before emitting 'finish'.

    • readonly writableCorked: number

      Number of times writable.uncork() needs to be called in order to fully uncork the stream.

    • readonly writableEnded: boolean

      Is true after writable.end() has been called. This property does not indicate whether the data has been flushed, for this use writable.writableFinished instead.

    • readonly writableFinished: boolean

      Is set to true immediately before the 'finish' event is emitted.

    • readonly writableHighWaterMark: number

      Return the value of highWaterMark passed when creating this Writable.

    • readonly writableLength: number

      This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the highWaterMark.

    • readonly writableNeedDrain: boolean

      Is true if the stream's buffer has been full and stream will emit 'drain'.

    • readonly writableObjectMode: boolean

      Getter for the property objectMode of a given Writable stream.

    • static captureRejections: boolean

      Value: boolean

      Change the default captureRejections option on all new EventEmitter objects.
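
      A brief sketch of what enabling this option does, under the assumption that the emitter is created after the flag is set: rejections from async listeners are routed to the emitter's 'error' event instead of becoming unhandled rejections.

      import { EventEmitter } from 'node:events';
      
      // Applies only to EventEmitter instances created after this assignment.
      EventEmitter.captureRejections = true;
      
      const ee = new EventEmitter();
      ee.on('data', async () => {
        throw new Error('rejected in an async listener');
      });
      ee.on('error', (err) => {
        console.error('captured:', err.message);
        // Prints: captured: rejected in an async listener
      });
      ee.emit('data');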

    • readonly static captureRejectionSymbol: typeof captureRejectionSymbol

      Value: Symbol.for('nodejs.rejection')

      See how to write a custom rejection handler.

    • static defaultMaxListeners: number

      By default, a maximum of 10 listeners can be registered for any single event. This limit can be changed for individual EventEmitter instances using the emitter.setMaxListeners(n) method. To change the default for all EventEmitter instances, the events.defaultMaxListeners property can be used. If this value is not a positive number, a RangeError is thrown.

      Take caution when setting the events.defaultMaxListeners because the change affects all EventEmitter instances, including those created before the change is made. However, calling emitter.setMaxListeners(n) still has precedence over events.defaultMaxListeners.

      This is not a hard limit. The EventEmitter instance will allow more listeners to be added but will output a trace warning to stderr indicating that a "possible EventEmitter memory leak" has been detected. For any single EventEmitter, the emitter.getMaxListeners() and emitter.setMaxListeners() methods can be used to temporarily avoid this warning:

      import { EventEmitter } from 'node:events';
      const emitter = new EventEmitter();
      emitter.setMaxListeners(emitter.getMaxListeners() + 1);
      emitter.once('event', () => {
        // do stuff
        emitter.setMaxListeners(Math.max(emitter.getMaxListeners() - 1, 0));
      });
      

      The --trace-warnings command-line flag can be used to display the stack trace for such warnings.

      The emitted warning can be inspected with process.on('warning') and will have the additional emitter, type, and count properties, referring to the event emitter instance, the event's name and the number of attached listeners, respectively. Its name property is set to 'MaxListenersExceededWarning'.

    • readonly static errorMonitor: typeof errorMonitor

      This symbol shall be used to install a listener for only monitoring 'error' events. Listeners installed using this symbol are called before the regular 'error' listeners are called.

      Installing a listener using this symbol does not change the behavior once an 'error' event is emitted. Therefore, the process will still crash if no regular 'error' listener is installed.
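
      For illustration, a minimal sketch that installs a monitor listener alongside a regular 'error' listener; the monitor runs first and does not by itself prevent a crash, so the regular listener is still needed:

      import { EventEmitter, errorMonitor } from 'node:events';
      
      const ee = new EventEmitter();
      
      // Called before any regular 'error' listeners; useful for logging or metrics.
      ee.on(errorMonitor, (err) => {
        console.log('monitor saw:', err.message);
      });
      
      // A regular 'error' listener is still required to keep the process alive.
      ee.on('error', (err) => {
        console.log('handled:', err.message);
      });
      
      ee.emit('error', new Error('boom'));
      // Prints:
      //   monitor saw: boom
      //   handled: boom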

    • callback: (error?: null | Error) => void
      ): void;
    • error: null | Error,
      callback: (error?: null | Error) => void
      ): void;
    • callback: (error?: null | Error) => void
      ): void;
    • chunk: any,
      encoding: BufferEncoding,
      callback: (error?: null | Error) => void
      ): void;
    • chunks: { chunk: any; encoding: BufferEncoding }[],
      callback: (error?: null | Error) => void
      ): void;
    • error: Error,
      event: string | symbol,
      ...args: AnyRest
      ): void;
    • event: 'close',
      listener: () => void
      ): this;

      Event emitter. The defined events documented here include:

      1. close
      2. drain
      3. error
      4. finish
      5. pipe
      6. unpipe
      event: 'drain',
      listener: () => void
      ): this;

      Event emitter. The defined events documented here include:

      1. close
      2. drain
      3. error
      4. finish
      5. pipe
      6. unpipe
      event: 'error',
      listener: (error: Error) => void
      ): this;

      Event emitter. The defined events documented here include:

      1. close
      2. drain
      3. error
      4. finish
      5. pipe
      6. unpipe
      event: 'finish',
      listener: () => void
      ): this;

      Event emitter. The defined events documented here include:

      1. close
      2. drain
      3. error
      4. finish
      5. pipe
      6. unpipe
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;

      Event emitter. The defined events documented here include:

      1. close
      2. drain
      3. error
      4. finish
      5. pipe
      6. unpipe
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;

      Event emitter. The defined events documented here include:

      1. close
      2. drain
      3. error
      4. finish
      5. pipe
      6. unpipe
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Event emitter. The defined events documented here include:

      1. close
      2. drain
      3. error
      4. finish
      5. pipe
      6. unpipe
    • ): void;

      This method adds HTTP trailing headers (headers sent at the end of the message) to the response.

      Attempting to set a header field name or value that contains invalid characters will result in a TypeError being thrown.
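
      A minimal sketch of sending a trailer after the body, assuming the compatibility API shape documented here (the server-timing value is a placeholder):

      const server = http2.createServer((req, res) => {
        res.writeHead(200, { 'content-type': 'text/plain' });
        res.write('hello');
        // Trailing headers are sent after the final data chunk.
        res.addTrailers({ 'server-timing': 'app;dur=12.3' });
        res.end();
      });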

    • name: string,
      value: string | string[]
      ): void;

      Append a single header value to the header object.

      If the value is an array, this is equivalent to calling this method multiple times.

      If there were no previous values for the header, this is equivalent to calling setHeader.

      Attempting to set a header field name or value that contains invalid characters will result in a TypeError being thrown.

      // Returns headers including "set-cookie: a" and "set-cookie: b"
      const server = http2.createServer((req, res) => {
        res.setHeader('set-cookie', 'a');
        res.appendHeader('set-cookie', 'b');
        res.writeHead(200);
        res.end('ok');
      });
      
    • compose<T extends ReadableStream>(
      stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,
      options?: { signal: AbortSignal }
      ): T;
    • cork(): void;

      The writable.cork() method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.

      The primary intent of writable.cork() is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them all to writable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork() without implementing writable._writev() may have an adverse effect on throughput.

      See also: writable.uncork(), writable._writev().

    • callback: (err: null | Error, res: Http2ServerResponse) => void
      ): void;

      Call http2stream.pushStream() with the given headers, and wrap the given Http2Stream on a newly created Http2ServerResponse as the callback parameter if successful. When Http2ServerRequest is closed, the callback is called with an error ERR_HTTP2_INVALID_STREAM.

      @param headers

      An object describing the headers

      @param callback

      Called once http2stream.pushStream() is finished, either when the attempt to create the pushed Http2Stream has failed or been rejected, or when the state of Http2ServerRequest is closed prior to calling the http2stream.pushStream() method
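
      An illustrative sketch of the Node.js API shape; the pushed path and payload are placeholders, and server push may be unavailable depending on the runtime and the client's settings:

      const server = http2.createServer((req, res) => {
        res.createPushResponse({ ':path': '/style.css' }, (err, pushRes) => {
          if (err) return; // e.g. the stream closed or the client disabled push
          pushRes.writeHead(200, { 'content-type': 'text/css' });
          pushRes.end('body { color: teal; }');
        });
        res.writeHead(200, { 'content-type': 'text/html' });
        res.end('<link rel="stylesheet" href="/style.css"><p>hi</p>');
      });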

    • error?: Error
      ): this;

      Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the writable stream has ended and subsequent calls to write() or end() will result in an ERR_STREAM_DESTROYED error. This is a destructive and immediate way to destroy a stream. Previous calls to write() may not have drained, and may trigger an ERR_STREAM_DESTROYED error. Use end() instead of destroy if data should flush before close, or wait for the 'drain' event before destroying the stream.

      Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.

      Implementors should not override this method, but instead implement writable._destroy().

      @param error

      Optional, an error to emit with 'error' event.

    • event: 'close'
      ): boolean;

      Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

      Returns true if the event had listeners, false otherwise.

      import { EventEmitter } from 'node:events';
      const myEmitter = new EventEmitter();
      
      // First listener
      myEmitter.on('event', function firstListener() {
        console.log('Helloooo! first listener');
      });
      // Second listener
      myEmitter.on('event', function secondListener(arg1, arg2) {
        console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
      });
      // Third listener
      myEmitter.on('event', function thirdListener(...args) {
        const parameters = args.join(', ');
        console.log(`event with parameters ${parameters} in third listener`);
      });
      
      console.log(myEmitter.listeners('event'));
      
      myEmitter.emit('event', 1, 2, 3, 4, 5);
      
      // Prints:
      // [
      //   [Function: firstListener],
      //   [Function: secondListener],
      //   [Function: thirdListener]
      // ]
      // Helloooo! first listener
      // event with parameters 1, 2 in second listener
      // event with parameters 1, 2, 3, 4, 5 in third listener
      
      event: 'drain'
      ): boolean;
      event: 'error',
      error: Error
      ): boolean;
      event: 'finish'
      ): boolean;
      event: 'pipe',
      ): boolean;
      event: 'unpipe',
      ): boolean;
      event: string | symbol,
      ...args: any[]
      ): boolean;
    • callback?: () => void
      ): this;

      This method signals to the server that all of the response headers and body have been sent; that server should consider this message complete. The method, response.end(), MUST be called on each response.

      If data is specified, it is equivalent to calling response.write(data, encoding) followed by response.end(callback).

      If callback is specified, it will be called when the response stream is finished.

      data: string | Uint8Array<ArrayBufferLike>,
      callback?: () => void
      ): this;

      This method signals to the server that all of the response headers and body have been sent; that server should consider this message complete. The method, response.end(), MUST be called on each response.

      If data is specified, it is equivalent to calling response.write(data, encoding) followed by response.end(callback).

      If callback is specified, it will be called when the response stream is finished.

      data: string | Uint8Array<ArrayBufferLike>,
      encoding: BufferEncoding,
      callback?: () => void
      ): this;

      This method signals to the server that all of the response headers and body have been sent; that server should consider this message complete. The method, response.end(), MUST be called on each response.

      If data is specified, it is equivalent to calling response.write(data, encoding) followed by response.end(callback).

      If callback is specified, it will be called when the response stream is finished.

    • eventNames(): (string | symbol)[];

      Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or Symbols.

      import { EventEmitter } from 'node:events';
      
      const myEE = new EventEmitter();
      myEE.on('foo', () => {});
      myEE.on('bar', () => {});
      
      const sym = Symbol('symbol');
      myEE.on(sym, () => {});
      
      console.log(myEE.eventNames());
      // Prints: [ 'foo', 'bar', Symbol(symbol) ]
      
    • name: string
      ): string;

      Reads out a header that has already been queued but not sent to the client. The name is case-insensitive.

      const contentType = response.getHeader('content-type');
      
    • getHeaderNames(): string[];

      Returns an array containing the unique names of the current outgoing headers. All header names are lowercase.

      response.setHeader('Foo', 'bar');
      response.setHeader('Set-Cookie', ['foo=bar', 'bar=baz']);
      
      const headerNames = response.getHeaderNames();
      // headerNames === ['foo', 'set-cookie']
      
    • Returns a shallow copy of the current outgoing headers. Since a shallow copy is used, array values may be mutated without additional calls to various header-related http module methods. The keys of the returned object are the header names and the values are the respective header values. All header names are lowercase.

      The object returned by the response.getHeaders() method does not prototypically inherit from the JavaScript Object. This means that typical Object methods such as obj.toString(), obj.hasOwnProperty(), and others are not defined and will not work.

      response.setHeader('Foo', 'bar');
      response.setHeader('Set-Cookie', ['foo=bar', 'bar=baz']);
      
      const headers = response.getHeaders();
      // headers === { foo: 'bar', 'set-cookie': ['foo=bar', 'bar=baz'] }
      
    • getMaxListeners(): number;

      Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to EventEmitter.defaultMaxListeners.

    • name: string
      ): boolean;

      Returns true if the header identified by name is currently set in the outgoing headers. The header name matching is case-insensitive.

      const hasContentType = response.hasHeader('content-type');
      
    • eventName: string | symbol,
      listener?: Function
      ): number;

      Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.

      @param eventName

      The name of the event being listened for

      @param listener

      The event handler function
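
      A short illustrative example (the event name and handler are arbitrary):

      import { EventEmitter } from 'node:events';
      
      const ee = new EventEmitter();
      const handler = () => {};
      
      ee.on('ping', handler);
      ee.on('ping', handler);
      ee.on('ping', () => {});
      
      console.log(ee.listenerCount('ping'));          // Prints: 3
      console.log(ee.listenerCount('ping', handler)); // Prints: 2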

    • eventName: string | symbol
      ): Function[];

      Returns a copy of the array of listeners for the event named eventName.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      console.log(util.inspect(server.listeners('connection')));
      // Prints: [ [Function] ]
      
    • off<K>(
      eventName: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Alias for emitter.removeListener().

    • event: 'close',
      listener: () => void
      ): this;

      Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.on('foo', () => console.log('a'));
      myEE.prependListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: 'drain',
      listener: () => void
      ): this;
      event: 'error',
      listener: (error: Error) => void
      ): this;
      event: 'finish',
      listener: () => void
      ): this;
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • event: 'close',
      listener: () => void
      ): this;

      Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

      server.once('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.once('foo', () => console.log('a'));
      myEE.prependOnceListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: 'drain',
      listener: () => void
      ): this;
      event: 'error',
      listener: (error: Error) => void
      ): this;
      event: 'finish',
      listener: () => void
      ): this;
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • pipe<T extends WritableStream>(
      destination: T,
      options?: { end: boolean }
      ): T;
    • event: 'close',
      listener: () => void
      ): this;

      Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.prependListener('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: 'drain',
      listener: () => void
      ): this;
      event: 'error',
      listener: (error: Error) => void
      ): this;
      event: 'finish',
      listener: () => void
      ): this;
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • event: 'close',
      listener: () => void
      ): this;

      Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

      server.prependOnceListener('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: 'drain',
      listener: () => void
      ): this;
      event: 'error',
      listener: (error: Error) => void
      ): this;
      event: 'finish',
      listener: () => void
      ): this;
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • eventName: string | symbol
      ): Function[];

      Returns a copy of the array of listeners for the event named eventName, including any wrappers (such as those created by .once()).

      import { EventEmitter } from 'node:events';
      const emitter = new EventEmitter();
      emitter.once('log', () => console.log('log once'));
      
      // Returns a new Array with a function `onceWrapper` which has a property
      // `listener` which contains the original listener bound above
      const listeners = emitter.rawListeners('log');
      const logFnWrapper = listeners[0];
      
      // Logs "log once" to the console and does not unbind the `once` event
      logFnWrapper.listener();
      
      // Logs "log once" to the console and removes the listener
      logFnWrapper();
      
      emitter.on('log', () => console.log('log persistently'));
      // Will return a new Array with a single function bound by `.on()` above
      const newListeners = emitter.rawListeners('log');
      
      // Logs "log persistently" twice
      newListeners[0]();
      emitter.emit('log');
      
    • eventName?: string | symbol
      ): this;

      Removes all listeners, or those of the specified eventName.

      It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).

      Returns a reference to the EventEmitter, so that calls can be chained.

    • name: string
      ): void;

      Removes a header that has been queued for implicit sending.

      response.removeHeader('Content-Encoding');
      
    • event: 'close',
      listener: () => void
      ): this;

      Removes the specified listener from the listener array for the event named eventName.

      const callback = (stream) => {
        console.log('someone connected!');
      };
      server.on('connection', callback);
      // ...
      server.removeListener('connection', callback);
      

      removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance.

      Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from emit() in progress. Subsequent events behave as expected.

      import { EventEmitter } from 'node:events';
      class MyEmitter extends EventEmitter {}
      const myEmitter = new MyEmitter();
      
      const callbackA = () => {
        console.log('A');
        myEmitter.removeListener('event', callbackB);
      };
      
      const callbackB = () => {
        console.log('B');
      };
      
      myEmitter.on('event', callbackA);
      
      myEmitter.on('event', callbackB);
      
      // callbackA removes listener callbackB but it will still be called.
      // Internal listener array at time of emit [callbackA, callbackB]
      myEmitter.emit('event');
      // Prints:
      //   A
      //   B
      
      // callbackB is now removed.
      // Internal listener array [callbackA]
      myEmitter.emit('event');
      // Prints:
      //   A
      

      Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.

      When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed:

      import { EventEmitter } from 'node:events';
      const ee = new EventEmitter();
      
      function pong() {
        console.log('pong');
      }
      
      ee.on('ping', pong);
      ee.once('ping', pong);
      ee.removeListener('ping', pong);
      
      ee.emit('ping');
      ee.emit('ping');
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      event: 'drain',
      listener: () => void
      ): this;
      event: 'error',
      listener: (err: Error) => void
      ): this;
      event: 'finish',
      listener: () => void
      ): this;
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • encoding: BufferEncoding
      ): this;

      The writable.setDefaultEncoding() method sets the default encoding for a Writable stream.

      @param encoding

      The new default encoding
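
      A brief sketch with a plain Writable (not specific to HTTP/2) showing how the default encoding affects how strings are converted to bytes; the latin1 choice is only for illustration:

      import { Writable } from 'node:stream';
      
      const ws = new Writable({
        write(chunk, encoding, callback) {
          console.log(chunk.length); // byte length after encoding
          callback();
        },
      });
      
      ws.setDefaultEncoding('latin1');
      ws.write('héllo'); // 5 bytes in latin1 (would be 6 in the default 'utf8')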

    • name: string,
      value: string | number | readonly string[]
      ): void;

      Sets a single header value for implicit headers. If this header already exists in the to-be-sent headers, its value will be replaced. Use an array of strings here to send multiple headers with the same name.

      response.setHeader('Content-Type', 'text/html; charset=utf-8');
      

      or

      response.setHeader('Set-Cookie', ['type=ninja', 'language=javascript']);
      

      Attempting to set a header field name or value that contains invalid characters will result in a TypeError being thrown.

      When headers have been set with response.setHeader(), they will be merged with any headers passed to response.writeHead(), with the headers passed to response.writeHead() given precedence.

      // Returns content-type = text/plain
      const server = http2.createServer((req, res) => {
        res.setHeader('Content-Type', 'text/html; charset=utf-8');
        res.setHeader('X-Foo', 'bar');
        res.writeHead(200, { 'Content-Type': 'text/plain; charset=utf-8' });
        res.end('ok');
      });
      
    • n: number
      ): this;

      By default EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps find memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.

      Returns a reference to the EventEmitter, so that calls can be chained.

    • msecs: number,
      callback?: () => void
      ): void;

      Sets the Http2Stream's timeout value to msecs. If a callback is provided, then it is added as a listener on the 'timeout' event on the response object.

      If no 'timeout' listener is added to the request, the response, or the server, then Http2Streams are destroyed when they time out. If a handler is assigned to the request, the response, or the server's 'timeout' events, timed out sockets must be handled explicitly.
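
      For illustration, a minimal sketch that attaches a 'timeout' listener via the callback and handles the timeout explicitly; the 5000 ms value and 503 status are arbitrary choices:

      const server = http2.createServer((req, res) => {
        // Because a 'timeout' listener is attached, the stream is not destroyed
        // automatically; the timeout must be handled here.
        res.setTimeout(5000, () => {
          res.writeHead(503);
          res.end('request timed out');
        });
      });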

    • uncork(): void;

      The writable.uncork() method flushes all data buffered since cork was called.

      When using writable.cork() and writable.uncork() to manage the buffering of writes to a stream, defer calls to writable.uncork() using process.nextTick(). Doing so allows batching of all writable.write() calls that occur within a given Node.js event loop phase.

      stream.cork();
      stream.write('some ');
      stream.write('data ');
      process.nextTick(() => stream.uncork());
      

      If the writable.cork() method is called multiple times on a stream, the same number of calls to writable.uncork() must be called to flush the buffered data.

      stream.cork();
      stream.write('some ');
      stream.cork();
      stream.write('data ');
      process.nextTick(() => {
        stream.uncork();
        // The data will not be flushed until uncork() is called a second time.
        stream.uncork();
      });
      

      See also: writable.cork().

    • chunk: string | Uint8Array<ArrayBufferLike>,
      callback?: (err: Error) => void
      ): boolean;

      If this method is called and response.writeHead() has not been called, it will switch to implicit header mode and flush the implicit headers.

      This sends a chunk of the response body. This method may be called multiple times to provide successive parts of the body.

      In the node:http module, the response body is omitted when the request is a HEAD request. Similarly, the 204 and 304 responses must not include a message body.

      chunk can be a string or a buffer. If chunk is a string, the second parameter specifies how to encode it into a byte stream. By default the encoding is 'utf8'. callback will be called when this chunk of data is flushed.

      This is the raw HTTP body and has nothing to do with higher-level multi-part body encodings that may be used.

      The first time response.write() is called, it will send the buffered header information and the first chunk of the body to the client. The second time response.write() is called, Node.js assumes data will be streamed, and sends the new data separately. That is, the response is buffered up to the first chunk of the body.

      Returns true if the entire data was flushed successfully to the kernel buffer. Returns false if all or part of the data was queued in user memory. 'drain' will be emitted when the buffer is free again.

      chunk: string | Uint8Array<ArrayBufferLike>,
      encoding: BufferEncoding,
      callback?: (err: Error) => void
      ): boolean;

      If this method is called and response.writeHead() has not been called, it will switch to implicit header mode and flush the implicit headers.

      This sends a chunk of the response body. This method may be called multiple times to provide successive parts of the body.

      In the node:http module, the response body is omitted when the request is a HEAD request. Similarly, the 204 and 304 responses must not include a message body.

      chunk can be a string or a buffer. If chunk is a string, the second parameter specifies how to encode it into a byte stream. By default the encoding is 'utf8'. callback will be called when this chunk of data is flushed.

      This is the raw HTTP body and has nothing to do with higher-level multi-part body encodings that may be used.

      The first time response.write() is called, it will send the buffered header information and the first chunk of the body to the client. The second time response.write() is called, Node.js assumes data will be streamed, and sends the new data separately. That is, the response is buffered up to the first chunk of the body.

      Returns true if the entire data was flushed successfully to the kernel buffer. Returns false if all or part of the data was queued in user memory. 'drain' will be emitted when the buffer is free again.
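
      A short sketch of streaming a body in multiple chunks with an explicit encoding and a flush callback (the payload is a placeholder):

      const server = http2.createServer((req, res) => {
        res.writeHead(200, { 'content-type': 'text/plain; charset=utf-8' });
        res.write('hello ');
        res.write('world', 'utf8', () => {
          // Called once this chunk has been flushed.
          res.end();
        });
      });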

    • writeContinue(): void;

      Sends a status 100 Continue to the client, indicating that the request body should be sent. See the 'checkContinue' event on Http2Server and Http2SecureServer.

    • hints: Record<string, string | string[]>
      ): void;

      Sends a status 103 Early Hints to the client with a Link header, indicating that the user agent can preload/preconnect the linked resources. The hints argument is an object containing the values of the headers to be sent with the early hints message.

      Example

      const earlyHintsLink = '</styles.css>; rel=preload; as=style';
      response.writeEarlyHints({
        'link': earlyHintsLink,
      });
      
      const earlyHintsLinks = [
        '</styles.css>; rel=preload; as=style',
        '</scripts.js>; rel=preload; as=script',
      ];
      response.writeEarlyHints({
        'link': earlyHintsLinks,
      });
      
    • statusCode: number,
      ): this;

      Sends a response header to the request. The status code is a 3-digit HTTP status code, like 404. The last argument, headers, is an object containing the response headers.

      Returns a reference to the Http2ServerResponse, so that calls can be chained.

      For compatibility with HTTP/1, a human-readable statusMessage may be passed as the second argument. However, because the statusMessage has no meaning within HTTP/2, the argument will have no effect and a process warning will be emitted.

      const body = 'hello world';
      response.writeHead(200, {
        'Content-Length': Buffer.byteLength(body),
        'Content-Type': 'text/plain; charset=utf-8',
      });
      

      Content-Length is given in bytes, not characters. The Buffer.byteLength() API may be used to determine the number of bytes in a given encoding. On outbound messages, Node.js does not check if Content-Length and the length of the body being transmitted are equal or not. However, when receiving messages, Node.js will automatically reject messages when the Content-Length does not match the actual payload size.

      This method may be called at most one time on a message before response.end() is called.

      If response.write() or response.end() are called before calling this, the implicit/mutable headers will be calculated and this function will be called automatically.

      When headers have been set with response.setHeader(), they will be merged with any headers passed to response.writeHead(), with the headers passed to response.writeHead() given precedence.

      // Returns content-type = text/plain
      const server = http2.createServer((req, res) => {
        res.setHeader('Content-Type', 'text/html; charset=utf-8');
        res.setHeader('X-Foo', 'bar');
        res.writeHead(200, { 'Content-Type': 'text/plain; charset=utf-8' });
        res.end('ok');
      });
      

      Attempting to set a header field name or value that contains invalid characters will result in a TypeError being thrown.

      statusCode: number,
      statusMessage: string,
      ): this;

      Sends a response header to the request. The status code is a 3-digit HTTP status code, like 404. The last argument, headers, is an object containing the response headers.

      Returns a reference to the Http2ServerResponse, so that calls can be chained.

      For compatibility with HTTP/1, a human-readable statusMessage may be passed as the second argument. However, because the statusMessage has no meaning within HTTP/2, the argument will have no effect and a process warning will be emitted.

      const body = 'hello world';
      response.writeHead(200, {
        'Content-Length': Buffer.byteLength(body),
        'Content-Type': 'text/plain; charset=utf-8',
      });
      

      Content-Length is given in bytes, not characters. The Buffer.byteLength() API may be used to determine the number of bytes in a given encoding. On outbound messages, Node.js does not check if Content-Length and the length of the body being transmitted are equal or not. However, when receiving messages, Node.js will automatically reject messages when the Content-Length does not match the actual payload size.

      This method may be called at most one time on a message before response.end() is called.

      If response.write() or response.end() are called before calling this, the implicit/mutable headers will be calculated and this function will be called automatically.

      When headers have been set with response.setHeader(), they will be merged with any headers passed to response.writeHead(), with the headers passed to response.writeHead() given precedence.

      // Returns content-type = text/plain
      const server = http2.createServer((req, res) => {
        res.setHeader('Content-Type', 'text/html; charset=utf-8');
        res.setHeader('X-Foo', 'bar');
        res.writeHead(200, { 'Content-Type': 'text/plain; charset=utf-8' });
        res.end('ok');
      });
      

      Attempting to set a header field name or value that contains invalid characters will result in a TypeError being thrown.

    • signal: AbortSignal,
      resource: (event: Event) => void
      ): Disposable;

      Listens once to the abort event on the provided signal.

      Listening to the abort event on abort signals is unsafe and may lead to resource leaks since another third party with the signal can call e.stopImmediatePropagation(). Unfortunately Node.js cannot change this since it would violate the web standard. Additionally, the original API makes it easy to forget to remove listeners.

      This API solves these two issues, allowing AbortSignals to be used safely in Node.js APIs by listening to the event in such a way that stopImmediatePropagation does not prevent the listener from running.

      Returns a disposable so that it may be unsubscribed from more easily.

      import { addAbortListener } from 'node:events';
      
      function example(signal) {
        let disposable;
        try {
          signal.addEventListener('abort', (e) => e.stopImmediatePropagation());
          disposable = addAbortListener(signal, (e) => {
            // Do something when signal is aborted.
          });
        } finally {
          disposable?.[Symbol.dispose]();
        }
      }
      
      @returns

      Disposable that removes the abort listener.

    • static fromWeb(
      writableStream: WritableStream,
      options?: Pick<WritableOptions<Writable>, 'signal' | 'decodeStrings' | 'highWaterMark' | 'objectMode'>

      A utility method for creating a Writable from a web WritableStream.
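
      An illustrative sketch (the logging WritableStream is hypothetical) of wrapping a web WritableStream as a Node.js Writable:

      import { Writable } from 'node:stream';
      
      // A web WritableStream that simply logs each chunk it receives.
      const webWritable = new WritableStream({
        write(chunk) {
          console.log('chunk:', chunk);
        },
      });
      
      const nodeWritable = Writable.fromWeb(webWritable);
      nodeWritable.write('hello');
      nodeWritable.end();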

    • emitter: EventEmitter<DefaultEventMap> | EventTarget,
      name: string | symbol
      ): Function[];

      Returns a copy of the array of listeners for the event named eventName.

      For EventEmitters this behaves exactly the same as calling .listeners on the emitter.

      For EventTargets this is the only way to get the event listeners for the event target. This is useful for debugging and diagnostic purposes.

      import { getEventListeners, EventEmitter } from 'node:events';
      
      {
        const ee = new EventEmitter();
        const listener = () => console.log('Events are fun');
        ee.on('foo', listener);
        console.log(getEventListeners(ee, 'foo')); // [ [Function: listener] ]
      }
      {
        const et = new EventTarget();
        const listener = () => console.log('Events are fun');
        et.addEventListener('foo', listener);
        console.log(getEventListeners(et, 'foo')); // [ [Function: listener] ]
      }
      
    • emitter: EventEmitter<DefaultEventMap> | EventTarget
      ): number;

      Returns the currently set max amount of listeners.

      For EventEmitters this behaves exactly the same as calling .getMaxListeners on the emitter.

      For EventTargets this is the only way to get the max event listeners for the event target. If the number of event handlers on a single EventTarget exceeds the max set, the EventTarget will print a warning.

      import { getMaxListeners, setMaxListeners, EventEmitter } from 'node:events';
      
      {
        const ee = new EventEmitter();
        console.log(getMaxListeners(ee)); // 10
        setMaxListeners(11, ee);
        console.log(getMaxListeners(ee)); // 11
      }
      {
        const et = new EventTarget();
        console.log(getMaxListeners(et)); // 10
        setMaxListeners(11, et);
        console.log(getMaxListeners(et)); // 11
      }
      
    • static on(
      emitter: EventEmitter,
      eventName: string | symbol,
      options?: StaticEventEmitterIteratorOptions
      ): AsyncIterator<any[]>;
      import { on, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ee = new EventEmitter();
      
      // Emit later on
      process.nextTick(() => {
        ee.emit('foo', 'bar');
        ee.emit('foo', 42);
      });
      
      for await (const event of on(ee, 'foo')) {
        // The execution of this inner block is synchronous and it
        // processes one event at a time (even with await). Do not use
        // if concurrent execution is required.
        console.log(event); // prints ['bar'] [42]
      }
      // Unreachable here
      

      Returns an AsyncIterator that iterates eventName events. It will throw if the EventEmitter emits 'error'. It removes all listeners when exiting the loop. The value returned by each iteration is an array composed of the emitted event arguments.

      An AbortSignal can be used to cancel waiting on events:

      import { on, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ac = new AbortController();
      
      (async () => {
        const ee = new EventEmitter();
      
        // Emit later on
        process.nextTick(() => {
          ee.emit('foo', 'bar');
          ee.emit('foo', 42);
        });
      
        for await (const event of on(ee, 'foo', { signal: ac.signal })) {
          // The execution of this inner block is synchronous and it
          // processes one event at a time (even with await). Do not use
          // if concurrent execution is required.
          console.log(event); // prints ['bar'] [42]
        }
        // Unreachable here
      })();
      
      process.nextTick(() => ac.abort());
      

      Use the close option to specify an array of event names that will end the iteration:

      import { on, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ee = new EventEmitter();
      
      // Emit later on
      process.nextTick(() => {
        ee.emit('foo', 'bar');
        ee.emit('foo', 42);
        ee.emit('close');
      });
      
      for await (const event of on(ee, 'foo', { close: ['close'] })) {
        console.log(event); // prints ['bar'] [42]
      }
      // the loop will exit after 'close' is emitted
      console.log('done'); // prints 'done'
      
      @returns

      An AsyncIterator that iterates eventName events emitted by the emitter

      static on(
      emitter: EventTarget,
      eventName: string,
      options?: StaticEventEmitterIteratorOptions
      ): AsyncIterator<any[]>;
      import { on, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ee = new EventEmitter();
      
      // Emit later on
      process.nextTick(() => {
        ee.emit('foo', 'bar');
        ee.emit('foo', 42);
      });
      
      for await (const event of on(ee, 'foo')) {
        // The execution of this inner block is synchronous and it
        // processes one event at a time (even with await). Do not use
        // if concurrent execution is required.
        console.log(event); // prints ['bar'] [42]
      }
      // Unreachable here
      

      Returns an AsyncIterator that iterates eventName events. It will throw if the EventEmitter emits 'error'. It removes all listeners when exiting the loop. The value returned by each iteration is an array composed of the emitted event arguments.

      An AbortSignal can be used to cancel waiting on events:

      import { on, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ac = new AbortController();
      
      (async () => {
        const ee = new EventEmitter();
      
        // Emit later on
        process.nextTick(() => {
          ee.emit('foo', 'bar');
          ee.emit('foo', 42);
        });
      
        for await (const event of on(ee, 'foo', { signal: ac.signal })) {
          // The execution of this inner block is synchronous and it
          // processes one event at a time (even with await). Do not use
          // if concurrent execution is required.
          console.log(event); // prints ['bar'] [42]
        }
        // Unreachable here
      })();
      
      process.nextTick(() => ac.abort());
      

      Use the close option to specify an array of event names that will end the iteration:

      import { on, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ee = new EventEmitter();
      
      // Emit later on
      process.nextTick(() => {
        ee.emit('foo', 'bar');
        ee.emit('foo', 42);
        ee.emit('close');
      });
      
      for await (const event of on(ee, 'foo', { close: ['close'] })) {
        console.log(event); // prints ['bar'] [42]
      }
      // the loop will exit after 'close' is emitted
      console.log('done'); // prints 'done'
      
      @returns

      An AsyncIterator that iterates eventName events emitted by the emitter

    • static once(
      emitter: EventEmitter,
      eventName: string | symbol,
      options?: StaticEventEmitterOptions
      ): Promise<any[]>;

      Creates a Promise that is fulfilled when the EventEmitter emits the given event or that is rejected if the EventEmitter emits 'error' while waiting. The Promise will resolve with an array of all the arguments emitted to the given event.

      This method is intentionally generic and works with the web platform EventTarget interface, which has no special 'error' event semantics and does not listen to the 'error' event.

      import { once, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ee = new EventEmitter();
      
      process.nextTick(() => {
        ee.emit('myevent', 42);
      });
      
      const [value] = await once(ee, 'myevent');
      console.log(value);
      
      const err = new Error('kaboom');
      process.nextTick(() => {
        ee.emit('error', err);
      });
      
      try {
        await once(ee, 'myevent');
      } catch (err) {
        console.error('error happened', err);
      }
      

      The special handling of the 'error' event is only used when events.once() is used to wait for another event. If events.once() is used to wait for the 'error' event itself, then it is treated as any other kind of event without special handling:

      import { EventEmitter, once } from 'node:events';
      
      const ee = new EventEmitter();
      
      once(ee, 'error')
        .then(([err]) => console.log('ok', err.message))
        .catch((err) => console.error('error', err.message));
      
      ee.emit('error', new Error('boom'));
      
      // Prints: ok boom
      

      An AbortSignal can be used to cancel waiting for the event:

      import { EventEmitter, once } from 'node:events';
      
      const ee = new EventEmitter();
      const ac = new AbortController();
      
      async function foo(emitter, event, signal) {
        try {
          await once(emitter, event, { signal });
          console.log('event emitted!');
        } catch (error) {
          if (error.name === 'AbortError') {
            console.error('Waiting for the event was canceled!');
          } else {
            console.error('There was an error', error.message);
          }
        }
      }
      
      foo(ee, 'foo', ac.signal);
      ac.abort(); // Abort waiting for the event
      ee.emit('foo'); // Prints: Waiting for the event was canceled!
      
      static once(
      emitter: EventTarget,
      eventName: string,
      options?: StaticEventEmitterOptions
      ): Promise<any[]>;

      Creates a Promise that is fulfilled when the EventEmitter emits the given event or that is rejected if the EventEmitter emits 'error' while waiting. The Promise will resolve with an array of all the arguments emitted to the given event.

      This method is intentionally generic and works with the web platform EventTarget interface, which has no special 'error' event semantics and does not listen to the 'error' event.

      import { once, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ee = new EventEmitter();
      
      process.nextTick(() => {
        ee.emit('myevent', 42);
      });
      
      const [value] = await once(ee, 'myevent');
      console.log(value);
      
      const err = new Error('kaboom');
      process.nextTick(() => {
        ee.emit('error', err);
      });
      
      try {
        await once(ee, 'myevent');
      } catch (err) {
        console.error('error happened', err);
      }
      

      The special handling of the 'error' event is only used when events.once() is used to wait for another event. If events.once() is used to wait for the 'error' event itself, then it is treated as any other kind of event without special handling:

      import { EventEmitter, once } from 'node:events';
      
      const ee = new EventEmitter();
      
      once(ee, 'error')
        .then(([err]) => console.log('ok', err.message))
        .catch((err) => console.error('error', err.message));
      
      ee.emit('error', new Error('boom'));
      
      // Prints: ok boom
      

      An AbortSignal can be used to cancel waiting for the event:

      import { EventEmitter, once } from 'node:events';
      
      const ee = new EventEmitter();
      const ac = new AbortController();
      
      async function foo(emitter, event, signal) {
        try {
          await once(emitter, event, { signal });
          console.log('event emitted!');
        } catch (error) {
          if (error.name === 'AbortError') {
            console.error('Waiting for the event was canceled!');
          } else {
            console.error('There was an error', error.message);
          }
        }
      }
      
      foo(ee, 'foo', ac.signal);
      ac.abort(); // Abort waiting for the event
      ee.emit('foo'); // Prints: Waiting for the event was canceled!
      
    • static setMaxListeners(
      n?: number,
      ...eventTargets: (EventEmitter<DefaultEventMap> | EventTarget)[]
      ): void;
      import { setMaxListeners, EventEmitter } from 'node:events';
      
      const target = new EventTarget();
      const emitter = new EventEmitter();
      
      setMaxListeners(5, target, emitter);
      
      @param n

      A non-negative number. The maximum number of listeners per EventTarget event.

      @param eventTargets

      Zero or more {EventTarget} or {EventEmitter} instances. If none are specified, n is set as the default max for all newly created {EventTarget} and {EventEmitter} objects.

    • static toWeb(
      streamWritable: Writable
      ): WritableStream;

      A utility method for creating a web WritableStream from a Writable.
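
      As a rough sketch, assuming a locally constructed Writable, the converted stream can be driven through a standard web WritableStream writer:

      import { Writable } from 'node:stream';
      
      // A minimal Node.js Writable that simply discards its input.
      const nodeWritable = new Writable({
        write(chunk, encoding, callback) {
          callback();
        },
      });
      
      // Convert to a web WritableStream and write through its writer.
      const webWritable = Writable.toWeb(nodeWritable);
      const writer = webWritable.getWriter();
      await writer.write('hello');
      await writer.close();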

  • const sensitiveHeaders: symbol

    This symbol can be set as a property on the HTTP/2 headers object with an array value in order to provide a list of headers considered sensitive.
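
    A minimal sketch of marking headers as sensitive inside a server 'stream' handler (the header names here are only illustrative):

    import http2 from 'node:http2';
    
    const server = http2.createServer();
    
    server.on('stream', (stream) => {
      stream.respond({
        ':status': 200,
        'content-type': 'text/plain; charset=utf-8',
        'cookie': 'session=abc',
        'x-api-key': 'secret',
        // Never index these header values with HPACK compression.
        [http2.sensitiveHeaders]: ['cookie', 'x-api-key'],
      });
      stream.end('ok');
    });
    
    server.listen(8000);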

  • function connect(
    authority: string | URL,
    listener: (session: ClientHttp2Session, socket: Socket | TLSSocket) => void
    ): ClientHttp2Session;

    Returns a ClientHttp2Session instance.

    import http2 from 'node:http2';
    const client = http2.connect('https://localhost:1234');
    
    // Use the client
    
    client.close();
    
    @param authority

    The remote HTTP/2 server to connect to. This must be in the form of a minimal, valid URL with the http:// or https:// prefix, host name, and IP port (if a non-default port is used). Userinfo (user ID and password), path, querystring, and fragment details in the URL will be ignored.

    @param listener

    Will be registered as a one-time listener of the 'connect' event.

    function connect(
    authority: string | URL,
    listener?: (session: ClientHttp2Session, socket: Socket | TLSSocket) => void
    ): ClientHttp2Session;

    Returns a ClientHttp2Session instance.

    import http2 from 'node:http2';
    const client = http2.connect('https://localhost:1234');
    
    // Use the client
    
    client.close();
    
    @param authority

    The remote HTTP/2 server to connect to. This must be in the form of a minimal, valid URL with the http:// or https:// prefix, host name, and IP port (if a non-default port is used). Userinfo (user ID and password), path, querystring, and fragment details in the URL will be ignored.

    @param listener

    Will be registered as a one-time listener of the 'connect' event.
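
    A slightly fuller sketch (the authority and path are placeholders): issue a request on the returned session and read the response:

    import http2 from 'node:http2';
    
    const client = http2.connect('https://localhost:1234');
    client.on('error', (err) => console.error(err));
    
    const req = client.request({ ':path': '/' });
    req.setEncoding('utf8');
    
    let data = '';
    req.on('response', (headers) => {
      console.log(headers[':status']);
    });
    req.on('data', (chunk) => { data += chunk; });
    req.on('end', () => {
      console.log(data);
      client.close();
    });
    req.end();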

  • function createSecureServer(
    onRequestHandler?: (request: Http2ServerRequest, response: Http2ServerResponse) => void
    ): Http2SecureServer;

    Returns a tls.Server instance that creates and manages Http2Session instances.

    import http2 from 'node:http2';
    import fs from 'node:fs';
    
    const options = {
      key: fs.readFileSync('server-key.pem'),
      cert: fs.readFileSync('server-cert.pem'),
    };
    
    // Create a secure HTTP/2 server
    const server = http2.createSecureServer(options);
    
    server.on('stream', (stream, headers) => {
      stream.respond({
        'content-type': 'text/html; charset=utf-8',
        ':status': 200,
      });
      stream.end('<h1>Hello World</h1>');
    });
    
    server.listen(8443);
    
    @param onRequestHandler

    See Compatibility API

    function createSecureServer<Http1Request extends typeof IncomingMessage = typeof IncomingMessage, Http1Response extends typeof ServerResponse = typeof ServerResponse, Http2Request extends typeof Http2ServerRequest = typeof Http2ServerRequest, Http2Response extends typeof Http2ServerResponse = typeof Http2ServerResponse>(
    options: SecureServerOptions<Http1Request, Http1Response, Http2Request, Http2Response>,
    onRequestHandler?: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void
    ): Http2SecureServer<Http1Request, Http1Response, Http2Request, Http2Response>;

    Returns a tls.Server instance that creates and manages Http2Session instances.

    import http2 from 'node:http2';
    import fs from 'node:fs';
    
    const options = {
      key: fs.readFileSync('server-key.pem'),
      cert: fs.readFileSync('server-cert.pem'),
    };
    
    // Create a secure HTTP/2 server
    const server = http2.createSecureServer(options);
    
    server.on('stream', (stream, headers) => {
      stream.respond({
        'content-type': 'text/html; charset=utf-8',
        ':status': 200,
      });
      stream.end('<h1>Hello World</h1>');
    });
    
    server.listen(8443);
    
    @param onRequestHandler

    See Compatibility API
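
    As a sketch of the Compatibility API mentioned above (reusing the same key and certificate files as the example), requests can be handled with a (req, res) callback instead of 'stream' events:

    import http2 from 'node:http2';
    import fs from 'node:fs';
    
    const server = http2.createSecureServer(
      {
        key: fs.readFileSync('server-key.pem'),
        cert: fs.readFileSync('server-cert.pem'),
      },
      // Compatibility API handler, similar to http.createServer callbacks.
      (req, res) => {
        res.writeHead(200, { 'content-type': 'text/plain; charset=utf-8' });
        res.end(`You requested ${req.url}`);
      },
    );
    
    server.listen(8443);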

  • function createServer(
    onRequestHandler?: (request: Http2ServerRequest, response: Http2ServerResponse) => void
    ): Http2Server;

    Returns a net.Server instance that creates and manages Http2Session instances.

    Since there are no browsers known that support unencrypted HTTP/2, the use of createSecureServer is necessary when communicating with browser clients.

    import http2 from 'node:http2';
    
    // Create an unencrypted HTTP/2 server.
    // Since there are no browsers known that support
    // unencrypted HTTP/2, the use of `http2.createSecureServer()`
    // is necessary when communicating with browser clients.
    const server = http2.createServer();
    
    server.on('stream', (stream, headers) => {
      stream.respond({
        'content-type': 'text/html; charset=utf-8',
        ':status': 200,
      });
      stream.end('<h1>Hello World</h1>');
    });
    
    server.listen(8000);
    
    @param onRequestHandler

    See Compatibility API

    function createServer<Http1Request extends typeof IncomingMessage = typeof IncomingMessage, Http1Response extends typeof ServerResponse = typeof ServerResponse, Http2Request extends typeof Http2ServerRequest = typeof Http2ServerRequest, Http2Response extends typeof Http2ServerResponse = typeof Http2ServerResponse>(
    options: ServerOptions<Http1Request, Http1Response, Http2Request, Http2Response>,
    onRequestHandler?: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void
    ): Http2Server<Http1Request, Http1Response, Http2Request, Http2Response>;

    Returns a net.Server instance that creates and manages Http2Session instances.

    Since there are no browsers known that support unencrypted HTTP/2, the use of createSecureServer is necessary when communicating with browser clients.

    import http2 from 'node:http2';
    
    // Create an unencrypted HTTP/2 server.
    // Since there are no browsers known that support
    // unencrypted HTTP/2, the use of `http2.createSecureServer()`
    // is necessary when communicating with browser clients.
    const server = http2.createServer();
    
    server.on('stream', (stream, headers) => {
      stream.respond({
        'content-type': 'text/html; charset=utf-8',
        ':status': 200,
      });
      stream.end('<h1>Hello World</h1>');
    });
    
    server.listen(8000);
    
    @param onRequestHandler

    See Compatibility API

  • function getDefaultSettings(): Settings;

    Returns an object containing the default settings for an Http2Session instance. This method returns a new object instance every time it is called so instances returned may be safely modified for use.
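
    A minimal sketch: start from the defaults, tweak a field or two, and pass the result as the settings option of a server:

    import http2 from 'node:http2';
    
    const settings = http2.getDefaultSettings();
    settings.enablePush = false;
    settings.initialWindowSize = 2 ** 20;
    
    // The modified copy can be used anywhere a Settings object is expected.
    const server = http2.createServer({ settings });
    server.listen(8000);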

  • function getPackedSettings(
    settings: Settings
    ): Buffer;

    Returns a Buffer instance containing serialized representation of the given HTTP/2 settings as specified in the HTTP/2 specification. This is intended for use with the HTTP2-Settings header field.

    import http2 from 'node:http2';
    
    const packed = http2.getPackedSettings({ enablePush: false });
    
    console.log(packed.toString('base64'));
    // Prints: AAIAAAAA
    
  • Returns an HTTP/2 Settings Object containing the deserialized settings from the given Buffer as generated by http2.getPackedSettings().

    @param buf

    The packed settings.
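
    A small round-trip sketch using getPackedSettings() from above:

    import http2 from 'node:http2';
    
    const packed = http2.getPackedSettings({ enablePush: false });
    const settings = http2.getUnpackedSettings(packed);
    
    console.log(settings.enablePush);
    // Prints: false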

  • function performServerHandshake<Http1Request extends typeof IncomingMessage = typeof IncomingMessage, Http1Response extends typeof ServerResponse = typeof ServerResponse, Http2Request extends typeof Http2ServerRequest = typeof Http2ServerRequest, Http2Response extends typeof Http2ServerResponse = typeof Http2ServerResponse>(
    socket: Duplex,
    options?: ServerOptions<Http1Request, Http1Response, Http2Request, Http2Response>
    ): ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>;

    Create an HTTP/2 server session from an existing socket.

    @param socket

    A Duplex Stream

    @param options

    Any {@link createServer} options can be provided.
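
    A minimal sketch, assuming plain TCP connections are acceptable for the peer: accept sockets with net and drive HTTP/2 over each of them manually:

    import http2 from 'node:http2';
    import net from 'node:net';
    
    const tcpServer = net.createServer((socket) => {
      // Wrap the raw socket in an HTTP/2 server session.
      const session = http2.performServerHandshake(socket);
    
      session.on('stream', (stream) => {
        stream.respond({ ':status': 200 });
        stream.end('handled via performServerHandshake');
      });
    });
    
    tcpServer.listen(8000);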

Type definitions

  • interface AlternativeServiceOptions

  • interface ClientHttp2Session

    The EventEmitter class is defined and exposed by the node:events module:

    import { EventEmitter } from 'node:events';
    

    All EventEmitters emit the event 'newListener' when new listeners are added and 'removeListener' when existing listeners are removed.

    • readonly alpnProtocol?: string

      Value will be undefined if the Http2Session is not yet connected to a socket, h2c if the Http2Session is not connected to a TLSSocket, or will return the value of the connected TLSSocket's own alpnProtocol property.

    • readonly closed: boolean

      Will be true if this Http2Session instance has been closed, otherwise false.

    • readonly connecting: boolean

      Will be true if this Http2Session instance is still connecting, will be set to false before emitting connect event and/or calling the http2.connect callback.

    • readonly destroyed: boolean

      Will be true if this Http2Session instance has been destroyed and must no longer be used, otherwise false.

    • readonly encrypted?: boolean

      Value is undefined if the Http2Session session socket has not yet been connected, true if the Http2Session is connected with a TLSSocket, and false if the Http2Session is connected to any other kind of socket or stream.

    • readonly localSettings: Settings

      A prototype-less object describing the current local settings of this Http2Session. The local settings are local to this Http2Session instance.

    • readonly originSet?: string[]

      If the Http2Session is connected to a TLSSocket, the originSet property will return an Array of origins for which the Http2Session may be considered authoritative.

      The originSet property is only available when using a secure TLS connection.

    • readonly pendingSettingsAck: boolean

      Indicates whether the Http2Session is currently waiting for acknowledgment of a sent SETTINGS frame. Will be true after calling the http2session.settings() method. Will be false once all sent SETTINGS frames have been acknowledged.

    • readonly remoteSettings: Settings

      A prototype-less object describing the current remote settings of this Http2Session. The remote settings are set by the connected HTTP/2 peer.

    • readonly socket: Socket | TLSSocket

      Returns a Proxy object that acts as a net.Socket (or tls.TLSSocket) but limits available methods to ones safe to use with HTTP/2.

      destroy, emit, end, pause, read, resume, and write will throw an error with code ERR_HTTP2_NO_SOCKET_MANIPULATION. See Http2Session and Sockets for more information.

      setTimeout method will be called on this Http2Session.

      All other interactions will be routed directly to the socket.
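
      For example (assuming session is an existing Http2Session), reads are proxied through to the socket while direct manipulation throws:

      // Reading connection details through the proxy is routed to the socket.
      console.log(session.socket.remoteAddress, session.socket.remotePort);
      
      try {
        // Direct manipulation is blocked.
        session.socket.destroy();
      } catch (err) {
        console.error(err.code); // ERR_HTTP2_NO_SOCKET_MANIPULATION
      }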

    • readonly state: SessionState

      Provides miscellaneous information about the current state of the Http2Session.

      An object describing the current status of this Http2Session.

    • readonly type: number

      The http2session.type will be equal to http2.constants.NGHTTP2_SESSION_SERVER if this Http2Session instance is a server, and http2.constants.NGHTTP2_SESSION_CLIENT if the instance is a client.

    • error: Error,
      event: string | symbol,
      ...args: AnyRest
      ): void;
    • event: 'altsvc',
      listener: (alt: string, origin: string, stream: number) => void
      ): this;

      Alias for emitter.on(eventName, listener).

      event: 'origin',
      listener: (origins: string[]) => void
      ): this;

      Alias for emitter.on(eventName, listener).

      event: 'connect',
      listener: (session: ClientHttp2Session, socket: Socket | TLSSocket) => void
      ): this;

      Alias for emitter.on(eventName, listener).

      event: 'stream',
      listener: (stream: ClientHttp2Stream, headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void
      ): this;

      Alias for emitter.on(eventName, listener).

      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Alias for emitter.on(eventName, listener).

    • callback?: () => void
      ): void;

      Gracefully closes the Http2Session, allowing any existing streams to complete on their own and preventing new Http2Stream instances from being created. Once closed, http2session.destroy() might be called if there are no open Http2Stream instances.

      If specified, the callback function is registered as a handler for the 'close' event.

    • error?: Error,
      code?: number
      ): void;

      Immediately terminates the Http2Session and the associated net.Socket or tls.TLSSocket.

      Once destroyed, the Http2Session will emit the 'close' event. If error is not undefined, an 'error' event will be emitted immediately before the 'close' event.

      If there are any remaining open Http2Streams associated with the Http2Session, those will also be destroyed.

      @param error

      An Error object if the Http2Session is being destroyed due to an error.

      @param code

      The HTTP/2 error code to send in the final GOAWAY frame. If unspecified, and error is not undefined, the default is INTERNAL_ERROR, otherwise defaults to NO_ERROR.

    • event: 'altsvc',
      alt: string,
      origin: string,
      stream: number
      ): boolean;

      Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

      Returns true if the event had listeners, false otherwise.

      import { EventEmitter } from 'node:events';
      const myEmitter = new EventEmitter();
      
      // First listener
      myEmitter.on('event', function firstListener() {
        console.log('Helloooo! first listener');
      });
      // Second listener
      myEmitter.on('event', function secondListener(arg1, arg2) {
        console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
      });
      // Third listener
      myEmitter.on('event', function thirdListener(...args) {
        const parameters = args.join(', ');
        console.log(`event with parameters ${parameters} in third listener`);
      });
      
      console.log(myEmitter.listeners('event'));
      
      myEmitter.emit('event', 1, 2, 3, 4, 5);
      
      // Prints:
      // [
      //   [Function: firstListener],
      //   [Function: secondListener],
      //   [Function: thirdListener]
      // ]
      // Helloooo! first listener
      // event with parameters 1, 2 in second listener
      // event with parameters 1, 2, 3, 4, 5 in third listener
      
      event: 'origin',
      origins: readonly string[]
      ): boolean;

      Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

      Returns true if the event had listeners, false otherwise.

      import { EventEmitter } from 'node:events';
      const myEmitter = new EventEmitter();
      
      // First listener
      myEmitter.on('event', function firstListener() {
        console.log('Helloooo! first listener');
      });
      // Second listener
      myEmitter.on('event', function secondListener(arg1, arg2) {
        console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
      });
      // Third listener
      myEmitter.on('event', function thirdListener(...args) {
        const parameters = args.join(', ');
        console.log(`event with parameters ${parameters} in third listener`);
      });
      
      console.log(myEmitter.listeners('event'));
      
      myEmitter.emit('event', 1, 2, 3, 4, 5);
      
      // Prints:
      // [
      //   [Function: firstListener],
      //   [Function: secondListener],
      //   [Function: thirdListener]
      // ]
      // Helloooo! first listener
      // event with parameters 1, 2 in second listener
      // event with parameters 1, 2, 3, 4, 5 in third listener
      
      event: 'connect',
      socket: Socket | TLSSocket
      ): boolean;

      Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

      Returns true if the event had listeners, false otherwise.

      import { EventEmitter } from 'node:events';
      const myEmitter = new EventEmitter();
      
      // First listener
      myEmitter.on('event', function firstListener() {
        console.log('Helloooo! first listener');
      });
      // Second listener
      myEmitter.on('event', function secondListener(arg1, arg2) {
        console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
      });
      // Third listener
      myEmitter.on('event', function thirdListener(...args) {
        const parameters = args.join(', ');
        console.log(`event with parameters ${parameters} in third listener`);
      });
      
      console.log(myEmitter.listeners('event'));
      
      myEmitter.emit('event', 1, 2, 3, 4, 5);
      
      // Prints:
      // [
      //   [Function: firstListener],
      //   [Function: secondListener],
      //   [Function: thirdListener]
      // ]
      // Helloooo! first listener
      // event with parameters 1, 2 in second listener
      // event with parameters 1, 2, 3, 4, 5 in third listener
      
      event: 'stream',
      flags: number
      ): boolean;

      Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

      Returns true if the event had listeners, false otherwise.

      import { EventEmitter } from 'node:events';
      const myEmitter = new EventEmitter();
      
      // First listener
      myEmitter.on('event', function firstListener() {
        console.log('Helloooo! first listener');
      });
      // Second listener
      myEmitter.on('event', function secondListener(arg1, arg2) {
        console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
      });
      // Third listener
      myEmitter.on('event', function thirdListener(...args) {
        const parameters = args.join(', ');
        console.log(`event with parameters ${parameters} in third listener`);
      });
      
      console.log(myEmitter.listeners('event'));
      
      myEmitter.emit('event', 1, 2, 3, 4, 5);
      
      // Prints:
      // [
      //   [Function: firstListener],
      //   [Function: secondListener],
      //   [Function: thirdListener]
      // ]
      // Helloooo! first listener
      // event with parameters 1, 2 in second listener
      // event with parameters 1, 2, 3, 4, 5 in third listener
      
      event: string | symbol,
      ...args: any[]
      ): boolean;

      Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

      Returns true if the event had listeners, false otherwise.

      import { EventEmitter } from 'node:events';
      const myEmitter = new EventEmitter();
      
      // First listener
      myEmitter.on('event', function firstListener() {
        console.log('Helloooo! first listener');
      });
      // Second listener
      myEmitter.on('event', function secondListener(arg1, arg2) {
        console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
      });
      // Third listener
      myEmitter.on('event', function thirdListener(...args) {
        const parameters = args.join(', ');
        console.log(`event with parameters ${parameters} in third listener`);
      });
      
      console.log(myEmitter.listeners('event'));
      
      myEmitter.emit('event', 1, 2, 3, 4, 5);
      
      // Prints:
      // [
      //   [Function: firstListener],
      //   [Function: secondListener],
      //   [Function: thirdListener]
      // ]
      // Helloooo! first listener
      // event with parameters 1, 2 in second listener
      // event with parameters 1, 2, 3, 4, 5 in third listener
      
    • eventNames(): (string | symbol)[];

      Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or Symbols.

      import { EventEmitter } from 'node:events';
      
      const myEE = new EventEmitter();
      myEE.on('foo', () => {});
      myEE.on('bar', () => {});
      
      const sym = Symbol('symbol');
      myEE.on(sym, () => {});
      
      console.log(myEE.eventNames());
      // Prints: [ 'foo', 'bar', Symbol(symbol) ]
      
    • getMaxListeners(): number;

      Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to EventEmitter.defaultMaxListeners.

    • code?: number,
      lastStreamID?: number,
      opaqueData?: ArrayBufferView<ArrayBufferLike>
      ): void;

      Transmits a GOAWAY frame to the connected peer without shutting down the Http2Session.

      @param code

      An HTTP/2 error code

      @param lastStreamID

      The numeric ID of the last processed Http2Stream

      @param opaqueData

      A TypedArray or DataView instance containing additional data to be carried within the GOAWAY frame.
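
      A sketch (assuming session is an existing Http2Session): signal the peer without tearing the session down, carrying a short debug payload:

      import http2 from 'node:http2';
      
      session.goaway(
        http2.constants.NGHTTP2_NO_ERROR,
        0, // last processed stream ID
        Buffer.from('maintenance'), // opaque debug data carried in the GOAWAY frame
      );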

    • eventName: string | symbol,
      listener?: Function
      ): number;

      Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.

      @param eventName

      The name of the event being listened for

      @param listener

      The event handler function

    • eventName: string | symbol
      ): Function[];

      Returns a copy of the array of listeners for the event named eventName.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      console.log(util.inspect(server.listeners('connection')));
      // Prints: [ [Function] ]
      
    • off<K>(
      eventName: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Alias for emitter.removeListener().

    • event: 'altsvc',
      listener: (alt: string, origin: string, stream: number) => void
      ): this;

      Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.on('foo', () => console.log('a'));
      myEE.prependListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: 'origin',
      listener: (origins: string[]) => void
      ): this;

      Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.on('foo', () => console.log('a'));
      myEE.prependListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: 'connect',
      listener: (session: ClientHttp2Session, socket: Socket | TLSSocket) => void
      ): this;

      Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.on('foo', () => console.log('a'));
      myEE.prependListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: 'stream',
      listener: (stream: ClientHttp2Stream, headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void
      ): this;

      Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.on('foo', () => console.log('a'));
      myEE.prependListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.on('foo', () => console.log('a'));
      myEE.prependListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

    • event: 'altsvc',
      listener: (alt: string, origin: string, stream: number) => void
      ): this;

      Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

      server.once('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.once('foo', () => console.log('a'));
      myEE.prependOnceListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: 'origin',
      listener: (origins: string[]) => void
      ): this;

      Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

      server.once('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.once('foo', () => console.log('a'));
      myEE.prependOnceListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: 'connect',
      listener: (session: ClientHttp2Session, socket: Socket | TLSSocket) => void
      ): this;

      Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

      server.once('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.once('foo', () => console.log('a'));
      myEE.prependOnceListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: 'stream',
      listener: (stream: ClientHttp2Stream, headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void
      ): this;

      Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

      server.once('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.once('foo', () => console.log('a'));
      myEE.prependOnceListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

      server.once('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.once('foo', () => console.log('a'));
      myEE.prependOnceListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

    • callback: (err: null | Error, duration: number, payload: Buffer) => void
      ): boolean;

      Sends a PING frame to the connected HTTP/2 peer. A callback function must be provided. The method will return true if the PING was sent, false otherwise.

      The maximum number of outstanding (unacknowledged) pings is determined by the maxOutstandingPings configuration option. The default maximum is 10.

      If provided, the payload must be a Buffer, TypedArray, or DataView containing 8 bytes of data that will be transmitted with the PING and returned with the ping acknowledgment.

      The callback will be invoked with three arguments: an error argument that will be null if the PING was successfully acknowledged, a duration argument that reports the number of milliseconds elapsed since the ping was sent and the acknowledgment was received, and a Buffer containing the 8-byte PING payload.

      session.ping(Buffer.from('abcdefgh'), (err, duration, payload) => {
        if (!err) {
          console.log(`Ping acknowledged in ${duration} milliseconds`);
          console.log(`With payload '${payload.toString()}'`);
        }
      });
      

      If the payload argument is not specified, the default payload will be the 64-bit timestamp (little endian) marking the start of the PING duration.

      payload: ArrayBufferView,
      callback: (err: null | Error, duration: number, payload: Buffer) => void
      ): boolean;
    • event: 'altsvc',
      listener: (alt: string, origin: string, stream: number) => void
      ): this;

      Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.prependListener('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: 'origin',
      listener: (origins: string[]) => void
      ): this;

      Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.prependListener('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: 'connect',
      listener: (session: ClientHttp2Session, socket: Socket | TLSSocket) => void
      ): this;

      Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.prependListener('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: 'stream',
      listener: (stream: ClientHttp2Stream, headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void
      ): this;

      Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.prependListener('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.prependListener('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

    • event: 'altsvc',
      listener: (alt: string, origin: string, stream: number) => void
      ): this;

      Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

      server.prependOnceListener('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: 'origin',
      listener: (origins: string[]) => void
      ): this;

      Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

      server.prependOnceListener('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: 'connect',
      listener: (session: ClientHttp2Session, socket: Socket | TLSSocket) => void
      ): this;

      Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

      server.prependOnceListener('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: 'stream',
      listener: (stream: ClientHttp2Stream, headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void
      ): this;

      Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

      server.prependOnceListener('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

      server.prependOnceListener('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

    • eventName: string | symbol
      ): Function[];

      Returns a copy of the array of listeners for the event named eventName, including any wrappers (such as those created by .once()).

      import { EventEmitter } from 'node:events';
      const emitter = new EventEmitter();
      emitter.once('log', () => console.log('log once'));
      
      // Returns a new Array with a function `onceWrapper` which has a property
      // `listener` which contains the original listener bound above
      const listeners = emitter.rawListeners('log');
      const logFnWrapper = listeners[0];
      
      // Logs "log once" to the console and does not unbind the `once` event
      logFnWrapper.listener();
      
      // Logs "log once" to the console and removes the listener
      logFnWrapper();
      
      emitter.on('log', () => console.log('log persistently'));
      // Will return a new Array with a single function bound by `.on()` above
      const newListeners = emitter.rawListeners('log');
      
      // Logs "log persistently" twice
      newListeners[0]();
      emitter.emit('log');
      
    • ref(): void;

      Calls ref() on this Http2Session instance's underlying net.Socket.

    • eventName?: string | symbol
      ): this;

      Removes all listeners, or those of the specified eventName.

      It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).

      Returns a reference to the EventEmitter, so that calls can be chained.

    • eventName: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Removes the specified listener from the listener array for the event named eventName.

      const callback = (stream) => {
        console.log('someone connected!');
      };
      server.on('connection', callback);
      // ...
      server.removeListener('connection', callback);
      

      removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance.

      Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from emit() in progress. Subsequent events behave as expected.

      import { EventEmitter } from 'node:events';
      class MyEmitter extends EventEmitter {}
      const myEmitter = new MyEmitter();
      
      const callbackA = () => {
        console.log('A');
        myEmitter.removeListener('event', callbackB);
      };
      
      const callbackB = () => {
        console.log('B');
      };
      
      myEmitter.on('event', callbackA);
      
      myEmitter.on('event', callbackB);
      
      // callbackA removes listener callbackB but it will still be called.
      // Internal listener array at time of emit [callbackA, callbackB]
      myEmitter.emit('event');
      // Prints:
      //   A
      //   B
      
      // callbackB is now removed.
      // Internal listener array [callbackA]
      myEmitter.emit('event');
      // Prints:
      //   A
      

      Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.

      When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed:

      import { EventEmitter } from 'node:events';
      const ee = new EventEmitter();
      
      function pong() {
        console.log('pong');
      }
      
      ee.on('ping', pong);
      ee.once('ping', pong);
      ee.removeListener('ping', pong);
      
      ee.emit('ping');
      ee.emit('ping');
      

      Returns a reference to the EventEmitter, so that calls can be chained.

    • For HTTP/2 Client Http2Session instances only, the http2session.request() creates and returns an Http2Stream instance that can be used to send an HTTP/2 request to the connected server.

      When a ClientHttp2Session is first created, the socket may not yet be connected. If clienthttp2session.request() is called during this time, the actual request will be deferred until the socket is ready to go. If the session is closed before the actual request is executed, an ERR_HTTP2_GOAWAY_SESSION error is thrown.

      This method is only available if http2session.type is equal to http2.constants.NGHTTP2_SESSION_CLIENT.

      import http2 from 'node:http2';
      const clientSession = http2.connect('https://localhost:1234');
      const {
        HTTP2_HEADER_PATH,
        HTTP2_HEADER_STATUS,
      } = http2.constants;
      
      const req = clientSession.request({ [HTTP2_HEADER_PATH]: '/' });
      req.on('response', (headers) => {
        console.log(headers[HTTP2_HEADER_STATUS]);
        req.on('data', (chunk) => { /* .. */ });
        req.on('end', () => { /* .. */ });
      });
      

      When the options.waitForTrailers option is set, the 'wantTrailers' event is emitted immediately after queuing the last chunk of payload data to be sent. The http2stream.sendTrailers() method can then be called to send trailing headers to the peer.

      When options.waitForTrailers is set, the Http2Stream will not automatically close when the final DATA frame is transmitted. User code must call either http2stream.sendTrailers() or http2stream.close() to close the Http2Stream.

      When options.signal is set with an AbortSignal and then abort on the corresponding AbortController is called, the request will emit an 'error' event with an AbortError error.

      When the :method and :path pseudo-headers are not specified within headers, they respectively default to:

      • :method = 'GET'
      • :path = '/'
    • windowSize: number
      ): void;

      Sets the local endpoint's window size. The windowSize is the total window size to set, not the delta.

      import http2 from 'node:http2';
      
      const server = http2.createServer();
      const expectedWindowSize = 2 ** 20;
      server.on('connect', (session) => {
      
        // Set local window size to be 2 ** 20
        session.setLocalWindowSize(expectedWindowSize);
      });
      
    • n: number
      ): this;

      By default EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps finding memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.

      Returns a reference to the EventEmitter, so that calls can be chained.

    • msecs: number,
      callback?: () => void
      ): void;

      Used to set a callback function that is called when there is no activity on the Http2Session after msecs milliseconds. The given callback is registered as a listener on the 'timeout' event.
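
      A one-line sketch (assuming session is an existing Http2Session): close the session after 60 seconds of inactivity:

      session.setTimeout(60_000, () => session.close());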

    • settings: Settings,
      callback?: (err: null | Error, settings: Settings, duration: number) => void
      ): void;

      Updates the current local settings for this Http2Session and sends a new SETTINGS frame to the connected HTTP/2 peer.

      Once called, the http2session.pendingSettingsAck property will be true while the session is waiting for the remote peer to acknowledge the new settings.

      The new settings will not become effective until the SETTINGS acknowledgment is received and the 'localSettings' event is emitted. It is possible to send multiple SETTINGS frames while acknowledgment is still pending.

      @param callback

      Callback that is called once the session is connected or right away if the session is already connected.
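
      A sketch (assuming session is an existing Http2Session): disable server push and observe when the peer acknowledges the SETTINGS frame:

      session.settings({ enablePush: false }, (err, settings, duration) => {
        if (err) {
          console.error('settings were not acknowledged', err);
          return;
        }
        console.log(`acknowledged after ${duration} ms`, settings.enablePush);
      });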

    • unref(): void;

      Calls unref() on this Http2Session instance's underlying net.Socket.

  • interface ClientHttp2Stream

    Duplex streams are streams that implement both the Readable and Writable interfaces.

    Examples of Duplex streams include:

    • TCP sockets
    • zlib streams
    • crypto streams
    • readonly aborted: boolean

      Set to true if the Http2Stream instance was aborted abnormally. When set, the 'aborted' event will have been emitted.

    • allowHalfOpen: boolean

      If false then the stream will automatically end the writable side when the readable side ends. Set initially by the allowHalfOpen constructor option, which defaults to true.

      This can be changed manually to change the half-open behavior of an existing Duplex stream instance, but must be changed before the 'end' event is emitted.

    • readonly bufferSize: number

      This property shows the number of characters currently buffered to be written. See net.Socket.bufferSize for details.

    • readonly closed: boolean

      Set to true if the Http2Stream instance has been closed.

    • readonly destroyed: boolean

      Set to true if the Http2Stream instance has been destroyed and is no longer usable.

    • readonly endAfterHeaders: boolean

      Set to true if the END_STREAM flag was set in the request or response HEADERS frame received, indicating that no additional data should be received and the readable side of the Http2Stream will be closed.

    • readonly errored: null | Error

      Returns error if the stream has been destroyed with an error.

    • readonly id?: number

      The numeric stream identifier of this Http2Stream instance. Set to undefined if the stream identifier has not yet been assigned.

    • readonly pending: boolean

      Set to true if the Http2Stream instance has not yet been assigned a numeric stream identifier.

    • readable: boolean

      Is true if it is safe to call read, which means the stream has not been destroyed or emitted 'error' or 'end'.

    • readonly readableAborted: boolean

      Returns whether the stream was destroyed or errored before emitting 'end'.

    • readonly readableDidRead: boolean

      Returns whether 'data' has been emitted.

    • readonly readableEncoding: null | BufferEncoding

      Getter for the property encoding of a given Readable stream. The encoding property can be set using the setEncoding method.

    • readonly readableEnded: boolean

      Becomes true when 'end' event is emitted.

    • readonly readableFlowing: null | boolean

      This property reflects the current state of a Readable stream as described in the Three states section.

    • readonly readableHighWaterMark: number

      Returns the value of highWaterMark passed when creating this Readable.

    • readonly readableLength: number

      This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the highWaterMark.

    • readonly readableObjectMode: boolean

      Getter for the property objectMode of a given Readable stream.

    • readonly rstCode: number

      Set to the RST_STREAM error code reported when the Http2Stream is destroyed after either receiving an RST_STREAM frame from the connected peer, calling http2stream.close(), or http2stream.destroy(). Will be undefined if the Http2Stream has not been closed.

    • readonly sentHeaders: OutgoingHttpHeaders

      An object containing the outbound headers sent for this Http2Stream.

    • readonly sentInfoHeaders?: OutgoingHttpHeaders[]

      An array of objects containing the outbound informational (additional) headers sent for this Http2Stream.

    • readonly sentTrailers?: OutgoingHttpHeaders

      An object containing the outbound trailers sent for this HttpStream.

    • readonly session: undefined | Http2Session

      A reference to the Http2Session instance that owns this Http2Stream. The value will be undefined after the Http2Stream instance is destroyed.

    • readonly state: StreamState

      Provides miscellaneous information about the current state of the Http2Stream.

      A current state of this Http2Stream.

    • readonly writable: boolean

      Is true if it is safe to call writable.write(), which means the stream has not been destroyed, errored, or ended.

    • readonly writableAborted: boolean

      Returns whether the stream was destroyed or errored before emitting 'finish'.

    • readonly writableCorked: number

      Number of times writable.uncork() needs to be called in order to fully uncork the stream.

    • readonly writableEnded: boolean

      Is true after writable.end() has been called. This property does not indicate whether the data has been flushed, for this use writable.writableFinished instead.

    • readonly writableFinished: boolean

      Is set to true immediately before the 'finish' event is emitted.

    • readonly writableHighWaterMark: number

      Return the value of highWaterMark passed when creating this Writable.

    • readonly writableLength: number

      This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the highWaterMark.

    • readonly writableNeedDrain: boolean

      Is true if the stream's buffer has been full and stream will emit 'drain'.

    • readonly writableObjectMode: boolean

      Getter for the property objectMode of a given Writable stream.

    • callback: (error?: null | Error) => void
      ): void;
    • error: null | Error,
      callback: (error?: null | Error) => void
      ): void;
    • callback: (error?: null | Error) => void
      ): void;
    • size: number
      ): void;
    • chunk: any,
      encoding: BufferEncoding,
      callback: (error?: null | Error) => void
      ): void;
    • chunks: { chunk: any; encoding: BufferEncoding }[],
      callback: (error?: null | Error) => void
      ): void;
    • [Symbol.asyncDispose](): Promise<void>;

      Calls readable.destroy() with an AbortError and returns a promise that fulfills when the stream is finished.

    • [Symbol.asyncIterator](): AsyncIterator<any>;
    • error: Error,
      event: string | symbol,
      ...args: AnyRest
      ): void;
    • event: 'continue',
      listener: () => {}
      ): this;

      Event emitter. The defined events on this stream include:

      1. close
      2. data
      3. drain
      4. end
      5. error
      6. finish
      7. pause
      8. pipe
      9. readable
      10. resume
      11. unpipe
      event: 'headers',
      listener: (headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void
      ): this;

      Event emitter. The defined events include:

      1. close
      2. data
      3. drain
      4. end
      5. error
      6. finish
      7. pause
      8. pipe
      9. readable
      10. resume
      11. unpipe
      event: 'push',
      listener: (headers: IncomingHttpHeaders, flags: number) => void
      ): this;

      Event emitter. The defined events include:

      1. close
      2. data
      3. drain
      4. end
      5. error
      6. finish
      7. pause
      8. pipe
      9. readable
      10. resume
      11. unpipe
      event: 'response',
      listener: (headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void
      ): this;

      Event emitter. The defined events include:

      1. close
      2. data
      3. drain
      4. end
      5. error
      6. finish
      7. pause
      8. pipe
      9. readable
      10. resume
      11. unpipe
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Event emitter. The defined events include:

      1. close
      2. data
      3. drain
      4. end
      5. error
      6. finish
      7. pause
      8. pipe
      9. readable
      10. resume
      11. unpipe
    • options?: Pick<ArrayOptions, 'signal'>

      This method returns a new stream with chunks of the underlying stream paired with a counter in the form [index, chunk]. The first index value is 0 and it increases by 1 for each chunk produced.

      @returns

      a stream of indexed pairs.

    • code?: number,
      callback?: () => void
      ): void;

      Closes the Http2Stream instance by sending an RST_STREAM frame to the connected HTTP/2 peer.

      @param code

      Unsigned 32-bit integer identifying the error code.

      @param callback

      An optional function registered to listen for the 'close' event.
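
      For example, a client can cancel an in-flight request and react once the stream has closed; this sketch assumes an HTTP/2 server is reachable at http://localhost:8000:

      import http2 from 'node:http2';

      const client = http2.connect('http://localhost:8000');
      const req = client.request({ ':path': '/' });

      // Send RST_STREAM with NGHTTP2_CANCEL; the callback runs once 'close' is emitted.
      req.close(http2.constants.NGHTTP2_CANCEL, () => {
        client.close();
      });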

    • compose<T extends ReadableStream>(
      stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,
      options?: { signal: AbortSignal }
      ): T;
    • cork(): void;

      The writable.cork() method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.

      The primary intent of writable.cork() is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them all to writable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork() without implementing writable._writev() may have an adverse effect on throughput.

      See also: writable.uncork(), writable._writev().

    • error?: Error
      ): this;

      Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the readable stream will release any internal resources and subsequent calls to push() will be ignored.

      Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.

      Implementors should not override this method, but instead implement readable._destroy().

      @param error

      Error which will be passed as payload in 'error' event

    • limit: number,
      options?: Pick<ArrayOptions, 'signal'>

      This method returns a new stream with the first limit chunks dropped from the start.

      @param limit

      the number of chunks to drop from the readable.

      @returns

      a stream with limit chunks dropped from the start.
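
      Since an Http2Stream is also a Readable, this helper behaves like the generic one; a sketch with an in-memory stream for illustration:

      import { Readable } from 'node:stream';

      // Skip the first two chunks; only the remainder reaches the consumer.
      const rest = await Readable.from([1, 2, 3, 4]).drop(2).toArray();
      console.log(rest); // [3, 4]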

    • event: 'continue'
      ): boolean;

      Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

      Returns true if the event had listeners, false otherwise.

      import { EventEmitter } from 'node:events';
      const myEmitter = new EventEmitter();
      
      // First listener
      myEmitter.on('event', function firstListener() {
        console.log('Helloooo! first listener');
      });
      // Second listener
      myEmitter.on('event', function secondListener(arg1, arg2) {
        console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
      });
      // Third listener
      myEmitter.on('event', function thirdListener(...args) {
        const parameters = args.join(', ');
        console.log(`event with parameters ${parameters} in third listener`);
      });
      
      console.log(myEmitter.listeners('event'));
      
      myEmitter.emit('event', 1, 2, 3, 4, 5);
      
      // Prints:
      // [
      //   [Function: firstListener],
      //   [Function: secondListener],
      //   [Function: thirdListener]
      // ]
      // Helloooo! first listener
      // event with parameters 1, 2 in second listener
      // event with parameters 1, 2, 3, 4, 5 in third listener
      
      event: 'headers',
      flags: number
      ): boolean;
      event: 'push',
      flags: number
      ): boolean;
      event: 'response',
      flags: number
      ): boolean;
      event: string | symbol,
      ...args: any[]
      ): boolean;
    • cb?: () => void
      ): this;

      Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

      Calling the write method after calling end will raise an error.

      // Write 'hello, ' and then end with 'world!'.
      import fs from 'node:fs';
      const file = fs.createWriteStream('example.txt');
      file.write('hello, ');
      file.end('world!');
      // Writing more now is not allowed!
      
      chunk: any,
      cb?: () => void
      ): this;

      Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

      Calling the write method after calling end will raise an error.

      // Write 'hello, ' and then end with 'world!'.
      import fs from 'node:fs';
      const file = fs.createWriteStream('example.txt');
      file.write('hello, ');
      file.end('world!');
      // Writing more now is not allowed!
      
      @param chunk

      Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

      chunk: any,
      encoding: BufferEncoding,
      cb?: () => void
      ): this;

      Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

      Calling the write method after calling end will raise an error.

      // Write 'hello, ' and then end with 'world!'.
      import fs from 'node:fs';
      const file = fs.createWriteStream('example.txt');
      file.write('hello, ');
      file.end('world!');
      // Writing more now is not allowed!
      
      @param chunk

      Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

      @param encoding

      The encoding if chunk is a string

    • eventNames(): string | symbol[];

      Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or Symbols.

      import { EventEmitter } from 'node:events';
      
      const myEE = new EventEmitter();
      myEE.on('foo', () => {});
      myEE.on('bar', () => {});
      
      const sym = Symbol('symbol');
      myEE.on(sym, () => {});
      
      console.log(myEE.eventNames());
      // Prints: [ 'foo', 'bar', Symbol(symbol) ]
      
    • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
      options?: ArrayOptions
      ): Promise<boolean>;

      This method is similar to Array.prototype.every and calls fn on each chunk in the stream to check whether all awaited return values are truthy for fn. As soon as an awaited fn call on a chunk returns a falsy value, the stream is destroyed and the promise is fulfilled with false. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true.

      @param fn

      a function to call on each chunk of the stream. Async or not.

      @returns

      a promise evaluating to true if fn returned a truthy value for every one of the chunks.
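
      A sketch with an in-memory Readable for illustration:

      import { Readable } from 'node:stream';

      // Fulfills with true only if the predicate holds for every chunk.
      const allPositive = await Readable.from([1, 2, 3]).every((n) => n > 0);
      console.log(allPositive); // true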

    • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
      options?: ArrayOptions

      This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be awaited.

      @param fn

      a function to filter chunks from the stream. Async or not.

      @returns

      a stream filtered with the predicate fn.
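
      Illustrated with an in-memory Readable:

      import { Readable } from 'node:stream';

      // Only chunks for which the predicate is truthy are passed downstream.
      const evens = await Readable.from([1, 2, 3, 4]).filter((n) => n % 2 === 0).toArray();
      console.log(evens); // [2, 4]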

    • find<T>(
      fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => data is T,
      options?: ArrayOptions
      ): Promise<undefined | T>;

      This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.

      @param fn

      a function to call on each chunk of the stream. Async or not.

      @returns

      a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.

      fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
      options?: ArrayOptions
      ): Promise<any>;

      This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.

      @param fn

      a function to call on each chunk of the stream. Async or not.

      @returns

      a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.
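
      For illustration, searching an in-memory Readable:

      import { Readable } from 'node:stream';

      // Fulfills with the first matching chunk; the stream is destroyed afterwards.
      const firstLarge = await Readable.from([1, 5, 10]).find((n) => n > 4);
      console.log(firstLarge); // 5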

    • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => any,
      options?: ArrayOptions

      This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.

      It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.

      @param fn

      a function to map over every chunk in the stream. May be async. May be a stream or generator.

      @returns

      a stream flat-mapped with the function fn.
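
      A small sketch, again using an in-memory Readable:

      import { Readable } from 'node:stream';

      // Each chunk maps to an iterable; the results are flattened into a single stream.
      const words = await Readable.from(['hello world']).flatMap((s) => s.split(' ')).toArray();
      console.log(words); // ['hello', 'world']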

    • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => void | Promise<void>,
      options?: ArrayOptions
      ): Promise<void>;

      This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be awaited.

      This method is different from for await...of loops in that it can optionally process chunks concurrently. In addition, a forEach iteration can only be stopped by having passed a signal option and aborting the related AbortController while for await...of can be stopped with break or return. In either case the stream will be destroyed.

      This method is different from listening to the 'data' event in that it uses the readable event in the underlying machinery and can limit the number of concurrent fn calls.

      @param fn

      a function to call on each chunk of the stream. Async or not.

      @returns

      a promise for when the stream has finished.
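
      For example, processing chunks with bounded concurrency (the concurrency option comes from ArrayOptions); sketched on an in-memory Readable:

      import { Readable } from 'node:stream';

      // Process up to two chunks at a time; the promise resolves when the stream ends.
      await Readable.from([1, 2, 3, 4]).forEach(async (n) => {
        console.log(n);
      }, { concurrency: 2 });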

    • getMaxListeners(): number;

      Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to EventEmitter.defaultMaxListeners.

    • isPaused(): boolean;

      The readable.isPaused() method returns the current operating state of the Readable. This is used primarily by the mechanism that underlies the readable.pipe() method. In most typical cases, there will be no reason to use this method directly.

      const readable = new stream.Readable();
      
      readable.isPaused(); // === false
      readable.pause();
      readable.isPaused(); // === true
      readable.resume();
      readable.isPaused(); // === false
      
    • options?: { destroyOnReturn: boolean }
      ): AsyncIterator<any>;

      The iterator created by this method gives users the option to cancel the destruction of the stream if the for await...of loop is exited by return, break, or throw, or if the iterator should destroy the stream if the stream emitted an error during iteration.

    • eventName: string | symbol,
      listener?: Function
      ): number;

      Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.

      @param eventName

      The name of the event being listened for

      @param listener

      The event handler function

    • eventName: string | symbol
      ): Function[];

      Returns a copy of the array of listeners for the event named eventName.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      console.log(util.inspect(server.listeners('connection')));
      // Prints: [ [Function] ]
      
    • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => any,
      options?: ArrayOptions

      This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be awaited before being passed to the result stream.

      @param fn

      a function to map over every chunk in the stream. Async or not.

      @returns

      a stream mapped with the function fn.
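
      A brief illustration with an in-memory Readable:

      import { Readable } from 'node:stream';

      // Async mappers are awaited before their result is emitted downstream.
      const doubled = await Readable.from([1, 2, 3]).map(async (n) => n * 2).toArray();
      console.log(doubled); // [2, 4, 6]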

    • off<K>(
      eventName: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Alias for emitter.removeListener().

    • event: 'continue',
      listener: () => {}
      ): this;

      Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.on('foo', () => console.log('a'));
      myEE.prependListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: 'headers',
      listener: (headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void
      ): this;
      event: 'push',
      listener: (headers: IncomingHttpHeaders, flags: number) => void
      ): this;
      event: 'response',
      listener: (headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • event: 'continue',
      listener: () => {}
      ): this;

      Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

      server.once('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.once('foo', () => console.log('a'));
      myEE.prependOnceListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: 'headers',
      listener: (headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void
      ): this;
      event: 'push',
      listener: (headers: IncomingHttpHeaders, flags: number) => void
      ): this;
      event: 'response',
      listener: (headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • pause(): this;

      The readable.pause() method will cause a stream in flowing mode to stop emitting 'data' events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.

      const readable = getReadableStreamSomehow();
      readable.on('data', (chunk) => {
        console.log(`Received ${chunk.length} bytes of data.`);
        readable.pause();
        console.log('There will be no additional data for 1 second.');
        setTimeout(() => {
          console.log('Now data will start flowing again.');
          readable.resume();
        }, 1000);
      });
      

      The readable.pause() method has no effect if there is a 'readable' event listener.

    • pipe<T extends WritableStream>(
      destination: T,
      options?: { end: boolean }
      ): T;
    • event: 'continue',
      listener: () => {}
      ): this;

      Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.prependListener('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: 'headers',
      listener: (headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void
      ): this;
      event: 'push',
      listener: (headers: IncomingHttpHeaders, flags: number) => void
      ): this;
      event: 'response',
      listener: (headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • event: 'continue',
      listener: () => {}
      ): this;

      Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

      server.prependOnceListener('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: 'headers',
      listener: (headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void
      ): this;
      event: 'push',
      listener: (headers: IncomingHttpHeaders, flags: number) => void
      ): this;
      event: 'response',
      listener: (headers: IncomingHttpHeaders & IncomingHttpStatusHeader, flags: number) => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • ): void;

      Updates the priority for this Http2Stream instance.
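
      The parameter list above is abbreviated; in Node.js the method takes a single options object. A hedged sketch, assuming stream is an existing Http2Stream:

      // exclusive, parent, weight, and silent follow the Node.js http2stream.priority() options.
      stream.priority({
        exclusive: false,
        weight: 16,    // relative weight, 1 through 256
        silent: false, // false: a PRIORITY frame is sent to the peer rather than only updating local state
      });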

    • chunk: any,
      encoding?: BufferEncoding
      ): boolean;
    • eventName: string | symbol
      ): Function[];

      Returns a copy of the array of listeners for the event named eventName, including any wrappers (such as those created by .once()).

      import { EventEmitter } from 'node:events';
      const emitter = new EventEmitter();
      emitter.once('log', () => console.log('log once'));
      
      // Returns a new Array with a function `onceWrapper` which has a property
      // `listener` which contains the original listener bound above
      const listeners = emitter.rawListeners('log');
      const logFnWrapper = listeners[0];
      
      // Logs "log once" to the console and does not unbind the `once` event
      logFnWrapper.listener();
      
      // Logs "log once" to the console and removes the listener
      logFnWrapper();
      
      emitter.on('log', () => console.log('log persistently'));
      // Will return a new Array with a single function bound by `.on()` above
      const newListeners = emitter.rawListeners('log');
      
      // Logs "log persistently" twice
      newListeners[0]();
      emitter.emit('log');
      
    • size?: number
      ): any;

      The readable.read() method reads data out of the internal buffer and returns it. If no data is available to be read, null is returned. By default, the data is returned as a Buffer object unless an encoding has been specified using the readable.setEncoding() method or the stream is operating in object mode.

      The optional size argument specifies a specific number of bytes to read. If size bytes are not available to be read, null will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.

      If the size argument is not specified, all of the data contained in the internal buffer will be returned.

      The size argument must be less than or equal to 1 GiB.

      The readable.read() method should only be called on Readable streams operating in paused mode. In flowing mode, readable.read() is called automatically until the internal buffer is fully drained.

      const readable = getReadableStreamSomehow();
      
      // 'readable' may be triggered multiple times as data is buffered in
      readable.on('readable', () => {
        let chunk;
        console.log('Stream is readable (new data received in buffer)');
        // Use a loop to make sure we read all currently available data
        while (null !== (chunk = readable.read())) {
          console.log(`Read ${chunk.length} bytes of data...`);
        }
      });
      
      // 'end' will be triggered once when there is no more data available
      readable.on('end', () => {
        console.log('Reached end of stream.');
      });
      

      Each call to readable.read() returns a chunk of data, or null. The chunks are not concatenated. A while loop is necessary to consume all data currently in the buffer. When reading a large file .read() may return null, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new 'readable' event will be emitted when there is more data in the buffer. Finally the 'end' event will be emitted when there is no more data to come.

      Therefore to read a file's whole contents from a readable, it is necessary to collect chunks across multiple 'readable' events:

      const chunks = [];
      
      readable.on('readable', () => {
        let chunk;
        while (null !== (chunk = readable.read())) {
          chunks.push(chunk);
        }
      });
      
      readable.on('end', () => {
        const content = chunks.join('');
      });
      

      A Readable stream in object mode will always return a single item from a call to readable.read(size), regardless of the value of the size argument.

      If the readable.read() method returns a chunk of data, a 'data' event will also be emitted.

      Calling read after the 'end' event has been emitted will return null. No runtime error will be raised.

      @param size

      Optional argument to specify how much data to read.

    • reduce<T = any>(
      fn: (previous: any, data: any, options?: Pick<ArrayOptions, 'signal'>) => T,
      initial?: undefined,
      options?: Pick<ArrayOptions, 'signal'>
      ): Promise<T>;

      This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.

      If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.

      The reducer function iterates the stream element-by-element which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the readable.map method.

      @param fn

      a reducer function to call over every chunk in the stream. Async or not.

      @param initial

      the initial value to use in the reduction.

      @returns

      a promise for the final value of the reduction.

      reduce<T = any>(
      fn: (previous: T, data: any, options?: Pick<ArrayOptions, 'signal'>) => T,
      initial: T,
      options?: Pick<ArrayOptions, 'signal'>
      ): Promise<T>;

      This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.

      If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.

      The reducer function iterates the stream element-by-element which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the readable.map method.

      @param fn

      a reducer function to call over every chunk in the stream. Async or not.

      @param initial

      the initial value to use in the reduction.

      @returns

      a promise for the final value of the reduction.
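
      For example, summing an in-memory stream with an explicit initial value:

      import { Readable } from 'node:stream';

      // Accumulates a single value, chunk by chunk, like Array.prototype.reduce.
      const total = await Readable.from([1, 2, 3, 4]).reduce((sum, n) => sum + n, 0);
      console.log(total); // 10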

    • eventName?: string | symbol
      ): this;

      Removes all listeners, or those of the specified eventName.

      It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).

      Returns a reference to the EventEmitter, so that calls can be chained.

    • event: 'close',
      listener: () => void
      ): this;

      Removes the specified listener from the listener array for the event named eventName.

      const callback = (stream) => {
        console.log('someone connected!');
      };
      server.on('connection', callback);
      // ...
      server.removeListener('connection', callback);
      

      removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance.

      Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from emit() in progress. Subsequent events behave as expected.

      import { EventEmitter } from 'node:events';
      class MyEmitter extends EventEmitter {}
      const myEmitter = new MyEmitter();
      
      const callbackA = () => {
        console.log('A');
        myEmitter.removeListener('event', callbackB);
      };
      
      const callbackB = () => {
        console.log('B');
      };
      
      myEmitter.on('event', callbackA);
      
      myEmitter.on('event', callbackB);
      
      // callbackA removes listener callbackB but it will still be called.
      // Internal listener array at time of emit [callbackA, callbackB]
      myEmitter.emit('event');
      // Prints:
      //   A
      //   B
      
      // callbackB is now removed.
      // Internal listener array [callbackA]
      myEmitter.emit('event');
      // Prints:
      //   A
      

      Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.

      When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed:

      import { EventEmitter } from 'node:events';
      const ee = new EventEmitter();
      
      function pong() {
        console.log('pong');
      }
      
      ee.on('ping', pong);
      ee.once('ping', pong);
      ee.removeListener('ping', pong);
      
      ee.emit('ping');
      ee.emit('ping');
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      event: 'data',
      listener: (chunk: any) => void
      ): this;
      event: 'drain',
      listener: () => void
      ): this;
      event: 'end',
      listener: () => void
      ): this;
      event: 'error',
      listener: (err: Error) => void
      ): this;
      event: 'finish',
      listener: () => void
      ): this;
      event: 'pause',
      listener: () => void
      ): this;
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;
      event: 'readable',
      listener: () => void
      ): this;
      event: 'resume',
      listener: () => void
      ): this;
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • resume(): this;

      The readable.resume() method causes an explicitly paused Readable stream to resume emitting 'data' events, switching the stream into flowing mode.

      The readable.resume() method can be used to fully consume the data from a stream without actually processing any of that data:

      getReadableStreamSomehow()
        .resume()
        .on('end', () => {
          console.log('Reached the end, but did not read anything.');
        });
      

      The readable.resume() method has no effect if there is a 'readable' event listener.

    • ): void;

      Sends a trailing HEADERS frame to the connected HTTP/2 peer. This method will cause the Http2Stream to be immediately closed and must only be called after the 'wantTrailers' event has been emitted. When sending a request or sending a response, the options.waitForTrailers option must be set in order to keep the Http2Stream open after the final DATA frame so that trailers can be sent.

      import http2 from 'node:http2';
      const server = http2.createServer();
      server.on('stream', (stream) => {
        stream.respond(undefined, { waitForTrailers: true });
        stream.on('wantTrailers', () => {
          stream.sendTrailers({ xyz: 'abc' });
        });
        stream.end('Hello World');
      });
      

      The HTTP/1 specification forbids trailers from containing HTTP/2 pseudo-header fields (e.g. ':method', ':path', etc).

    • encoding: BufferEncoding
      ): this;

      The writable.setDefaultEncoding() method sets the default encoding for a Writable stream.

      @param encoding

      The new default encoding

    • encoding: BufferEncoding
      ): this;

      The readable.setEncoding() method sets the character encoding for data read from the Readable stream.

      By default, no encoding is assigned and stream data will be returned as Buffer objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than as Buffer objects. For instance, calling readable.setEncoding('utf8') will cause the output data to be interpreted as UTF-8 data, and passed as strings. Calling readable.setEncoding('hex') will cause the data to be encoded in hexadecimal string format.

      The Readable stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream as Buffer objects.

      const readable = getReadableStreamSomehow();
      readable.setEncoding('utf8');
      readable.on('data', (chunk) => {
        assert.equal(typeof chunk, 'string');
        console.log('Got %d characters of string data:', chunk.length);
      });
      
      @param encoding

      The encoding to use.

    • n: number
      ): this;

      By default EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps find memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.

      Returns a reference to the EventEmitter, so that calls can be chained.

    • msecs: number,
      callback?: () => void
      ): void;
      import http2 from 'node:http2';
      const client = http2.connect('http://example.org:8000');
      const { NGHTTP2_CANCEL } = http2.constants;
      const req = client.request({ ':path': '/' });
      
      // Cancel the stream if there's no activity after 5 seconds
      req.setTimeout(5000, () => req.close(NGHTTP2_CANCEL));
      
    • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
      options?: ArrayOptions
      ): Promise<boolean>;

      This method is similar to Array.prototype.some and calls fn on each chunk in the stream until an awaited return value is truthy. As soon as an awaited fn call on a chunk returns a truthy value, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false.

      @param fn

      a function to call on each chunk of the stream. Async or not.

      @returns

      a promise evaluating to true if fn returned a truthy value for at least one of the chunks.
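
      A sketch with an in-memory Readable:

      import { Readable } from 'node:stream';

      // Fulfills with true as soon as any chunk matches; the stream is then destroyed.
      const hasNegative = await Readable.from([3, -1, 7]).some((n) => n < 0);
      console.log(hasNegative); // true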

    • limit: number,
      options?: Pick<ArrayOptions, 'signal'>

      This method returns a new stream with the first limit chunks.

      @param limit

      the number of chunks to take from the readable.

      @returns

      a stream with limit chunks taken.
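
      Illustrated on an in-memory Readable:

      import { Readable } from 'node:stream';

      // Only the first two chunks are emitted; the source is not read further.
      const firstTwo = await Readable.from([1, 2, 3, 4]).take(2).toArray();
      console.log(firstTwo); // [1, 2]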

    • options?: Pick<ArrayOptions, 'signal'>
      ): Promise<any[]>;

      This method allows easily obtaining the contents of a stream.

      As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.

      @returns

      a promise containing an array with the contents of the stream.
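
      For example:

      import { Readable } from 'node:stream';

      // Buffers the entire stream into memory as one array.
      const chunks = await Readable.from(['a', 'b', 'c']).toArray();
      console.log(chunks); // ['a', 'b', 'c']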

    • uncork(): void;

      The writable.uncork() method flushes all data buffered since cork was called.

      When using writable.cork() and writable.uncork() to manage the buffering of writes to a stream, defer calls to writable.uncork() using process.nextTick(). Doing so allows batching of all writable.write() calls that occur within a given Node.js event loop phase.

      stream.cork();
      stream.write('some ');
      stream.write('data ');
      process.nextTick(() => stream.uncork());
      

      If the writable.cork() method is called multiple times on a stream, the same number of calls to writable.uncork() must be called to flush the buffered data.

      stream.cork();
      stream.write('some ');
      stream.cork();
      stream.write('data ');
      process.nextTick(() => {
        stream.uncork();
        // The data will not be flushed until uncork() is called a second time.
        stream.uncork();
      });
      

      See also: writable.cork().

    • destination?: WritableStream
      ): this;

      The readable.unpipe() method detaches a Writable stream previously attached using the pipe method.

      If the destination is not specified, then all pipes are detached.

      If the destination is specified, but no pipe is set up for it, then the method does nothing.

      import fs from 'node:fs';
      const readable = getReadableStreamSomehow();
      const writable = fs.createWriteStream('file.txt');
      // All the data from readable goes into 'file.txt',
      // but only for the first second.
      readable.pipe(writable);
      setTimeout(() => {
        console.log('Stop writing to file.txt.');
        readable.unpipe(writable);
        console.log('Manually close the file stream.');
        writable.end();
      }, 1000);
      
      @param destination

      Optional specific stream to unpipe

    • chunk: any,
      encoding?: BufferEncoding
      ): void;

      Passing chunk as null signals the end of the stream (EOF) and behaves the same as readable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.

      The readable.unshift() method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.

      The stream.unshift(chunk) method cannot be called after the 'end' event has been emitted or a runtime error will be thrown.

      Developers using stream.unshift() often should consider switching to use of a Transform stream instead. See the API for stream implementers section for more information.

      // Pull off a header delimited by \n\n.
      // Use unshift() if we get too much.
      // Call the callback with (error, header, stream).
      import { StringDecoder } from 'node:string_decoder';
      function parseHeader(stream, callback) {
        stream.on('error', callback);
        stream.on('readable', onReadable);
        const decoder = new StringDecoder('utf8');
        let header = '';
        function onReadable() {
          let chunk;
          while (null !== (chunk = stream.read())) {
            const str = decoder.write(chunk);
            if (str.includes('\n\n')) {
              // Found the header boundary.
              const split = str.split(/\n\n/);
              header += split.shift();
              const remaining = split.join('\n\n');
              const buf = Buffer.from(remaining, 'utf8');
              stream.removeListener('error', callback);
              // Remove the 'readable' listener before unshifting.
              stream.removeListener('readable', onReadable);
              if (buf.length)
                stream.unshift(buf);
              // Now the body of the message can be read from the stream.
              callback(null, header, stream);
              return;
            }
            // Still reading the header.
            header += str;
          }
        }
      }
      

      Unlike push, stream.unshift(chunk) will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results if readable.unshift() is called during a read (i.e. from within a _read implementation on a custom stream). Following the call to readable.unshift() with an immediate push will reset the reading state appropriately, however it is best to simply avoid calling readable.unshift() while in the process of performing a read.

      @param chunk

      Chunk of data to unshift onto the read queue. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray}, {DataView} or null. For object mode streams, chunk may be any JavaScript value.

      @param encoding

      Encoding of string chunks. Must be a valid Buffer encoding, such as 'utf8' or 'ascii'.

    • stream: ReadableStream
      ): this;

      Prior to Node.js 0.10, streams did not implement the entire node:stream module API as it is currently defined. (See Compatibility for more information.)

      When using an older Node.js library that emits 'data' events and has a pause method that is advisory only, the readable.wrap() method can be used to create a Readable stream that uses the old stream as its data source.

      It will rarely be necessary to use readable.wrap() but the method has been provided as a convenience for interacting with older Node.js applications and libraries.

      import { OldReader } from './old-api-module.js';
      import { Readable } from 'node:stream';
      const oreader = new OldReader();
      const myReader = new Readable().wrap(oreader);
      
      myReader.on('readable', () => {
        myReader.read(); // etc.
      });
      
      @param stream

      An "old style" readable stream

    • chunk: any,
      callback?: (error: undefined | null | Error) => void
      ): boolean;

      The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

      The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

      While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

      Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

      If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

      function write(data, cb) {
        if (!stream.write(data)) {
          stream.once('drain', cb);
        } else {
          process.nextTick(cb);
        }
      }
      
      // Wait for cb to be called before doing any other write.
      write('hello', () => {
        console.log('Write completed, do more writes now.');
      });
      

      A Writable stream in object mode will always ignore the encoding argument.

      @param chunk

      Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

      @param callback

      Callback for when this chunk of data is flushed.

      @returns

      false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.

      chunk: any,
      encoding: BufferEncoding,
      callback?: (error: undefined | null | Error) => void
      ): boolean;

      The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

      The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

      While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

      Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

      If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

      function write(data, cb) {
        if (!stream.write(data)) {
          stream.once('drain', cb);
        } else {
          process.nextTick(cb);
        }
      }
      
      // Wait for cb to be called before doing any other write.
      write('hello', () => {
        console.log('Write completed, do more writes now.');
      });
      

      A Writable stream in object mode will always ignore the encoding argument.

      @param chunk

      Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

      @param encoding

      The encoding, if chunk is a string.

      @param callback

      Callback for when this chunk of data is flushed.

      @returns

      false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.

  • interface ClientSessionOptions

  • interface ClientSessionRequestOptions

  • interface Http2SecureServer<Http1Request extends typeof IncomingMessage = typeof IncomingMessage, Http1Response extends typeof ServerResponse = typeof ServerResponse, Http2Request extends typeof Http2ServerRequest = typeof Http2ServerRequest, Http2Response extends typeof Http2ServerResponse = typeof Http2ServerResponse>

    Accepts encrypted connections using TLS or SSL.

    • connections: number
    • readonly listening: boolean

      Indicates whether or not the server is listening for connections.

    • maxConnections: number

      Set this property to reject connections when the server's connection count gets high.

      It is not recommended to use this option once a socket has been sent to a child with child_process.fork().

    • [Symbol.asyncDispose](): Promise<void>;

      Calls server.close() and returns a promise that fulfills when the server has closed.

    • error: Error,
      event: string | symbol,
      ...args: AnyRest
      ): void;
    • hostname: string,
      ): void;

      The server.addContext() method adds a secure context that will be used if the client request's SNI name matches the supplied hostname (or wildcard).

      When there are multiple matching contexts, the most recently added one is used.

      @param hostname

      A SNI host name or wildcard (e.g. '*')

      @param context

      An object containing any of the possible properties from the createSecureContext options arguments (e.g. key, cert, ca, etc), or a TLS context object created with createSecureContext itself.
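
      A hedged sketch, assuming server is an existing Http2SecureServer and the PEM file paths are placeholders:

      import fs from 'node:fs';

      // The added context is selected when the client's SNI name matches 'example.org'.
      server.addContext('example.org', {
        key: fs.readFileSync('example-org-key.pem'),   // hypothetical path
        cert: fs.readFileSync('example-org-cert.pem'), // hypothetical path
      });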

    • event: 'checkContinue',
      listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void
      ): this;

      events.EventEmitter

      1. tlsClientError
      2. newSession
      3. OCSPRequest
      4. resumeSession
      5. secureConnection
      6. keylog
      event: 'request',
      listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void
      ): this;

      events.EventEmitter

      1. tlsClientError
      2. newSession
      3. OCSPRequest
      4. resumeSession
      5. secureConnection
      6. keylog
      event: 'session',
      listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>) => void
      ): this;

      events.EventEmitter

      1. tlsClientError
      2. newSession
      3. OCSPRequest
      4. resumeSession
      5. secureConnection
      6. keylog
      event: 'sessionError',
      listener: (err: Error) => void
      ): this;

      events.EventEmitter

      1. tlsClientError
      2. newSession
      3. OCSPRequest
      4. resumeSession
      5. secureConnection
      6. keylog
      event: 'stream',
      listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void
      ): this;

      events.EventEmitter

      1. tlsClientError
      2. newSession
      3. OCSPRequest
      4. resumeSession
      5. secureConnection
      6. keylog
      event: 'timeout',
      listener: () => void
      ): this;

      events.EventEmitter

      1. tlsClientError
      2. newSession
      3. OCSPRequest
      4. resumeSession
      5. secureConnection
      6. keylog
      event: 'unknownProtocol',
      listener: (socket: TLSSocket) => void
      ): this;

      events.EventEmitter

      1. tlsClientError
      2. newSession
      3. OCSPRequest
      4. resumeSession
      5. secureConnection
      6. keylog
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      events.EventEmitter

      1. tlsClientError
      2. newSession
      3. OCSPRequest
      4. resumeSession
      5. secureConnection
      6. keylog
    • address(): null | string | AddressInfo;

      Returns the bound address, the address family name, and port of the server as reported by the operating system if listening on an IP socket (useful to find which port was assigned when getting an OS-assigned address): { port: 12346, family: 'IPv4', address: '127.0.0.1' }.

      For a server listening on a pipe or Unix domain socket, the name is returned as a string.

      const server = net.createServer((socket) => {
        socket.end('goodbye\n');
      }).on('error', (err) => {
        // Handle errors here.
        throw err;
      });
      
      // Grab an arbitrary unused port.
      server.listen(() => {
        console.log('opened server on', server.address());
      });
      

      server.address() returns null before the 'listening' event has been emitted or after calling server.close().

    • callback?: (err?: Error) => void
      ): this;

      Stops the server from accepting new connections and keeps existing connections. This function is asynchronous, the server is finally closed when all connections are ended and the server emits a 'close' event. The optional callback will be called once the 'close' event occurs. Unlike that event, it will be called with an Error as its only argument if the server was not open when it was closed.

      @param callback

      Called when the server is closed.

    • event: 'checkContinue',
      request: InstanceType<Http2Request>,
      response: InstanceType<Http2Response>
      ): boolean;

      Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

      Returns true if the event had listeners, false otherwise.

      import { EventEmitter } from 'node:events';
      const myEmitter = new EventEmitter();
      
      // First listener
      myEmitter.on('event', function firstListener() {
        console.log('Helloooo! first listener');
      });
      // Second listener
      myEmitter.on('event', function secondListener(arg1, arg2) {
        console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
      });
      // Third listener
      myEmitter.on('event', function thirdListener(...args) {
        const parameters = args.join(', ');
        console.log(`event with parameters ${parameters} in third listener`);
      });
      
      console.log(myEmitter.listeners('event'));
      
      myEmitter.emit('event', 1, 2, 3, 4, 5);
      
      // Prints:
      // [
      //   [Function: firstListener],
      //   [Function: secondListener],
      //   [Function: thirdListener]
      // ]
      // Helloooo! first listener
      // event with parameters 1, 2 in second listener
      // event with parameters 1, 2, 3, 4, 5 in third listener
      
      event: 'request',
      request: InstanceType<Http2Request>,
      response: InstanceType<Http2Response>
      ): boolean;
      event: 'session',
      session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>
      ): boolean;
      event: 'sessionError',
      err: Error
      ): boolean;
      event: 'stream',
      flags: number
      ): boolean;
      event: 'timeout'
      ): boolean;
      event: 'unknownProtocol',
      socket: TLSSocket
      ): boolean;
      event: string | symbol,
      ...args: any[]
      ): boolean;
    • eventNames(): string | symbol[];

      Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or Symbols.

      import { EventEmitter } from 'node:events';
      
      const myEE = new EventEmitter();
      myEE.on('foo', () => {});
      myEE.on('bar', () => {});
      
      const sym = Symbol('symbol');
      myEE.on(sym, () => {});
      
      console.log(myEE.eventNames());
      // Prints: [ 'foo', 'bar', Symbol(symbol) ]
      
    • cb: (error: null | Error, count: number) => void
      ): this;

      Asynchronously get the number of concurrent connections on the server. Works when sockets were sent to forks.

      Callback should take two arguments err and count.
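
      A short sketch, assuming server is an existing listening server:

      // Works on any net.Server-derived server, including Http2SecureServer.
      server.getConnections((err, count) => {
        if (err) throw err;
        console.log(`open connections: ${count}`);
      });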

    • getMaxListeners(): number;

      Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to EventEmitter.defaultMaxListeners.

    • Returns the session ticket keys.

      See Session Resumption for more information.

      @returns

      A 48-byte buffer containing the session ticket keys.

    • port?: number,
      hostname?: string,
      backlog?: number,
      listeningListener?: () => void
      ): this;

      Start a server listening for connections. A net.Server can be a TCP or an IPC server depending on what it listens to.

      Possible signatures:

      • server.listen(handle[, backlog][, callback])
      • server.listen(options[, callback])
      • server.listen(path[, backlog][, callback]) for IPC servers
      • server.listen([port[, host[, backlog]]][, callback]) for TCP servers

      This function is asynchronous. When the server starts listening, the 'listening' event will be emitted. The last parameter callback will be added as a listener for the 'listening' event.

      All listen() methods can take a backlog parameter to specify the maximum length of the queue of pending connections. The actual length will be determined by the OS through sysctl settings such as tcp_max_syn_backlog and somaxconn on Linux. The default value of this parameter is 511 (not 512).

      All Socket are set to SO_REUSEADDR (see socket(7) for details).

      The server.listen() method can be called again if and only if there was an error during the first server.listen() call or server.close() has been called. Otherwise, an ERR_SERVER_ALREADY_LISTEN error will be thrown.

      One of the most common errors raised when listening is EADDRINUSE. This happens when another server is already listening on the requested port/path/handle. One way to handle this would be to retry after a certain amount of time:

      server.on('error', (e) => {
        if (e.code === 'EADDRINUSE') {
          console.error('Address in use, retrying...');
          setTimeout(() => {
            server.close();
            server.listen(PORT, HOST);
          }, 1000);
        }
      });
      
      port?: number,
      hostname?: string,
      listeningListener?: () => void
      ): this;

      Start a server listening for connections. A net.Server can be a TCP or an IPC server depending on what it listens to.

      Possible signatures:

      • server.listen(handle[, backlog][, callback])
      • server.listen(options[, callback])
      • server.listen(path[, backlog][, callback]) for IPC servers
      • server.listen([port[, host[, backlog]]][, callback]) for TCP servers

      This function is asynchronous. When the server starts listening, the 'listening' event will be emitted. The last parameter callback will be added as a listener for the 'listening' event.

      All listen() methods can take a backlog parameter to specify the maximum length of the queue of pending connections. The actual length will be determined by the OS through sysctl settings such as tcp_max_syn_backlog and somaxconn on Linux. The default value of this parameter is 511 (not 512).

      All Socket are set to SO_REUSEADDR (see socket(7) for details).

      The server.listen() method can be called again if and only if there was an error during the first server.listen() call or server.close() has been called. Otherwise, an ERR_SERVER_ALREADY_LISTEN error will be thrown.

      One of the most common errors raised when listening is EADDRINUSE. This happens when another server is already listening on the requested port/path/handle. One way to handle this would be to retry after a certain amount of time:

      server.on('error', (e) => {
        if (e.code === 'EADDRINUSE') {
          console.error('Address in use, retrying...');
          setTimeout(() => {
            server.close();
            server.listen(PORT, HOST);
          }, 1000);
        }
      });
      
      port?: number,
      backlog?: number,
      listeningListener?: () => void
      ): this;

      Start a server listening for connections. A net.Server can be a TCP or an IPC server depending on what it listens to.

      Possible signatures:

      • server.listen(handle[, backlog][, callback])
      • server.listen(options[, callback])
      • server.listen(path[, backlog][, callback]) for IPC servers
      • server.listen([port[, host[, backlog]]][, callback]) for TCP servers

      This function is asynchronous. When the server starts listening, the 'listening' event will be emitted. The last parameter callback will be added as a listener for the 'listening' event.

      All listen() methods can take a backlog parameter to specify the maximum length of the queue of pending connections. The actual length will be determined by the OS through sysctl settings such as tcp_max_syn_backlog and somaxconn on Linux. The default value of this parameter is 511 (not 512).

      All Socket are set to SO_REUSEADDR (see socket(7) for details).

      The server.listen() method can be called again if and only if there was an error during the first server.listen() call or server.close() has been called. Otherwise, an ERR_SERVER_ALREADY_LISTEN error will be thrown.

      One of the most common errors raised when listening is EADDRINUSE. This happens when another server is already listening on the requested port/path/handle. One way to handle this would be to retry after a certain amount of time:

      server.on('error', (e) => {
        if (e.code === 'EADDRINUSE') {
          console.error('Address in use, retrying...');
          setTimeout(() => {
            server.close();
            server.listen(PORT, HOST);
          }, 1000);
        }
      });
      
      port?: number,
      listeningListener?: () => void
      ): this;

      Start a server listening for connections. A net.Server can be a TCP or an IPC server depending on what it listens to.

      Possible signatures:

      • server.listen(handle[, backlog][, callback])
      • server.listen(options[, callback])
      • server.listen(path[, backlog][, callback]) for IPC servers
      • server.listen([port[, host[, backlog]]][, callback]) for TCP servers

      This function is asynchronous. When the server starts listening, the 'listening' event will be emitted. The last parameter callback will be added as a listener for the 'listening' event.

      All listen() methods can take a backlog parameter to specify the maximum length of the queue of pending connections. The actual length will be determined by the OS through sysctl settings such as tcp_max_syn_backlog and somaxconn on Linux. The default value of this parameter is 511 (not 512).

      All Socket are set to SO_REUSEADDR (see socket(7) for details).

      The server.listen() method can be called again if and only if there was an error during the first server.listen() call or server.close() has been called. Otherwise, an ERR_SERVER_ALREADY_LISTEN error will be thrown.

      One of the most common errors raised when listening is EADDRINUSE. This happens when another server is already listening on the requested port/path/handle. One way to handle this would be to retry after a certain amount of time:

      server.on('error', (e) => {
        if (e.code === 'EADDRINUSE') {
          console.error('Address in use, retrying...');
          setTimeout(() => {
            server.close();
            server.listen(PORT, HOST);
          }, 1000);
        }
      });
      
      path: string,
      backlog?: number,
      listeningListener?: () => void
      ): this;

      Start a server listening for connections. A net.Server can be a TCP or an IPC server depending on what it listens to.

      Possible signatures:

      • server.listen(handle[, backlog][, callback])
      • server.listen(options[, callback])
      • server.listen(path[, backlog][, callback]) for IPC servers
      • server.listen([port[, host[, backlog]]][, callback]) for TCP servers

      This function is asynchronous. When the server starts listening, the 'listening' event will be emitted. The last parameter callback will be added as a listener for the 'listening' event.

      All listen() methods can take a backlog parameter to specify the maximum length of the queue of pending connections. The actual length will be determined by the OS through sysctl settings such as tcp_max_syn_backlog and somaxconn on Linux. The default value of this parameter is 511 (not 512).

      All Socket are set to SO_REUSEADDR (see socket(7) for details).

      The server.listen() method can be called again if and only if there was an error during the first server.listen() call or server.close() has been called. Otherwise, an ERR_SERVER_ALREADY_LISTEN error will be thrown.

      One of the most common errors raised when listening is EADDRINUSE. This happens when another server is already listening on the requested port/path/handle. One way to handle this would be to retry after a certain amount of time:

      server.on('error', (e) => {
        if (e.code === 'EADDRINUSE') {
          console.error('Address in use, retrying...');
          setTimeout(() => {
            server.close();
            server.listen(PORT, HOST);
          }, 1000);
        }
      });
      
      path: string,
      listeningListener?: () => void
      ): this;

      Start a server listening for connections. A net.Server can be a TCP or an IPC server depending on what it listens to.

      Possible signatures:

      • server.listen(handle[, backlog][, callback])
      • server.listen(options[, callback])
      • server.listen(path[, backlog][, callback]) for IPC servers
      • server.listen([port[, host[, backlog]]][, callback]) for TCP servers

      This function is asynchronous. When the server starts listening, the 'listening' event will be emitted. The last parameter callback will be added as a listener for the 'listening' event.

      All listen() methods can take a backlog parameter to specify the maximum length of the queue of pending connections. The actual length will be determined by the OS through sysctl settings such as tcp_max_syn_backlog and somaxconn on Linux. The default value of this parameter is 511 (not 512).

      All Socket are set to SO_REUSEADDR (see socket(7) for details).

      The server.listen() method can be called again if and only if there was an error during the first server.listen() call or server.close() has been called. Otherwise, an ERR_SERVER_ALREADY_LISTEN error will be thrown.

      One of the most common errors raised when listening is EADDRINUSE. This happens when another server is already listening on the requested port/path/handle. One way to handle this would be to retry after a certain amount of time:

      server.on('error', (e) => {
        if (e.code === 'EADDRINUSE') {
          console.error('Address in use, retrying...');
          setTimeout(() => {
            server.close();
            server.listen(PORT, HOST);
          }, 1000);
        }
      });
      
      options: ListenOptions,
      listeningListener?: () => void
      ): this;

      Start a server listening for connections. A net.Server can be a TCP or an IPC server depending on what it listens to.

      Possible signatures:

      • server.listen(handle[, backlog][, callback])
      • server.listen(options[, callback])
      • server.listen(path[, backlog][, callback]) for IPC servers
      • server.listen([port[, host[, backlog]]][, callback]) for TCP servers

      This function is asynchronous. When the server starts listening, the 'listening' event will be emitted. The last parameter callback will be added as a listener for the 'listening' event.

      All listen() methods can take a backlog parameter to specify the maximum length of the queue of pending connections. The actual length will be determined by the OS through sysctl settings such as tcp_max_syn_backlog and somaxconn on Linux. The default value of this parameter is 511 (not 512).

      All Socket are set to SO_REUSEADDR (see socket(7) for details).

      The server.listen() method can be called again if and only if there was an error during the first server.listen() call or server.close() has been called. Otherwise, an ERR_SERVER_ALREADY_LISTEN error will be thrown.

      One of the most common errors raised when listening is EADDRINUSE. This happens when another server is already listening on the requested port/path/handle. One way to handle this would be to retry after a certain amount of time:

      server.on('error', (e) => {
        if (e.code === 'EADDRINUSE') {
          console.error('Address in use, retrying...');
          setTimeout(() => {
            server.close();
            server.listen(PORT, HOST);
          }, 1000);
        }
      });
      
      handle: any,
      backlog?: number,
      listeningListener?: () => void
      ): this;

      Start a server listening for connections. A net.Server can be a TCP or an IPC server depending on what it listens to.

      Possible signatures:

      • server.listen(handle[, backlog][, callback])
      • server.listen(options[, callback])
      • server.listen(path[, backlog][, callback]) for IPC servers
      • server.listen([port[, host[, backlog]]][, callback]) for TCP servers

      This function is asynchronous. When the server starts listening, the 'listening' event will be emitted. The last parameter callback will be added as a listener for the 'listening' event.

      All listen() methods can take a backlog parameter to specify the maximum length of the queue of pending connections. The actual length will be determined by the OS through sysctl settings such as tcp_max_syn_backlog and somaxconn on Linux. The default value of this parameter is 511 (not 512).

      All Socket are set to SO_REUSEADDR (see socket(7) for details).

      The server.listen() method can be called again if and only if there was an error during the first server.listen() call or server.close() has been called. Otherwise, an ERR_SERVER_ALREADY_LISTEN error will be thrown.

      One of the most common errors raised when listening is EADDRINUSE. This happens when another server is already listening on the requested port/path/handle. One way to handle this would be to retry after a certain amount of time:

      server.on('error', (e) => {
        if (e.code === 'EADDRINUSE') {
          console.error('Address in use, retrying...');
          setTimeout(() => {
            server.close();
            server.listen(PORT, HOST);
          }, 1000);
        }
      });
      
      handle: any,
      listeningListener?: () => void
      ): this;

      Start a server listening for connections. A net.Server can be a TCP or an IPC server depending on what it listens to.

      Possible signatures:

      • server.listen(handle[, backlog][, callback])
      • server.listen(options[, callback])
      • server.listen(path[, backlog][, callback]) for IPC servers
      • server.listen([port[, host[, backlog]]][, callback]) for TCP servers

      This function is asynchronous. When the server starts listening, the 'listening' event will be emitted. The last parameter callback will be added as a listener for the 'listening' event.

      All listen() methods can take a backlog parameter to specify the maximum length of the queue of pending connections. The actual length will be determined by the OS through sysctl settings such as tcp_max_syn_backlog and somaxconn on Linux. The default value of this parameter is 511 (not 512).

      All Socket are set to SO_REUSEADDR (see socket(7) for details).

      The server.listen() method can be called again if and only if there was an error during the first server.listen() call or server.close() has been called. Otherwise, an ERR_SERVER_ALREADY_LISTEN error will be thrown.

      One of the most common errors raised when listening is EADDRINUSE. This happens when another server is already listening on the requested port/path/handle. One way to handle this would be to retry after a certain amount of time:

      server.on('error', (e) => {
        if (e.code === 'EADDRINUSE') {
          console.error('Address in use, retrying...');
          setTimeout(() => {
            server.close();
            server.listen(PORT, HOST);
          }, 1000);
        }
      });
      
    • eventName: string | symbol,
      listener?: Function
      ): number;

      Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.

      @param eventName

      The name of the event being listened for

      @param listener

      The event handler function
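
      For instance, the count can be used to avoid attaching a duplicate handler. A small sketch; 'request' is just an example event name:

      if (server.listenerCount('request') === 0) {
        server.on('request', (request, response) => {
          response.end('ok');
        });
      }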

    • eventName: string | symbol
      ): Function[];

      Returns a copy of the array of listeners for the event named eventName.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      console.log(util.inspect(server.listeners('connection')));
      // Prints: [ [Function] ]
      
    • off<K>(
      eventName: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Alias for emitter.removeListener().

    • event: 'checkContinue',
      listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void
      ): this;

      Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.on('foo', () => console.log('a'));
      myEE.prependListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: 'request',
      listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void
      ): this;
      event: 'session',
      listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>) => void
      ): this;
      event: 'sessionError',
      listener: (err: Error) => void
      ): this;
      event: 'stream',
      listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void
      ): this;
      event: 'timeout',
      listener: () => void
      ): this;
      event: 'unknownProtocol',
      listener: (socket: TLSSocket) => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
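
      As a concrete illustration of the 'stream' overload above, a listener receives the stream, its headers, and the flags, and can respond directly at the HTTP/2 level. A minimal sketch; the response headers shown are only an example:

      server.on('stream', (stream, headers, flags) => {
        const path = headers[':path'];
        stream.respond({
          ':status': 200,
          'content-type': 'text/plain; charset=utf-8',
        });
        stream.end(`you requested ${path}`);
      });
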
    • event: 'checkContinue',
      listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void
      ): this;

      Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

      server.once('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.once('foo', () => console.log('a'));
      myEE.prependOnceListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: 'request',
      listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void
      ): this;
      event: 'session',
      listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>) => void
      ): this;
      event: 'sessionError',
      listener: (err: Error) => void
      ): this;
      event: 'stream',
      listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void
      ): this;
      event: 'timeout',
      listener: () => void
      ): this;
      event: 'unknownProtocol',
      listener: (socket: TLSSocket) => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • event: 'checkContinue',
      listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void
      ): this;

      Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.prependListener('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: 'request',
      listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void
      ): this;
      event: 'session',
      listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>) => void
      ): this;
      event: 'sessionError',
      listener: (err: Error) => void
      ): this;
      event: 'stream',
      listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void
      ): this;
      event: 'timeout',
      listener: () => void
      ): this;
      event: 'unknownProtocol',
      listener: (socket: TLSSocket) => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • event: 'checkContinue',
      listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void
      ): this;

      Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

      server.prependOnceListener('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: 'request',
      listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void
      ): this;
      event: 'session',
      listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>) => void
      ): this;
      event: 'sessionError',
      listener: (err: Error) => void
      ): this;
      event: 'stream',
      listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void
      ): this;
      event: 'timeout',
      listener: () => void
      ): this;
      event: 'unknownProtocol',
      listener: (socket: TLSSocket) => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • eventName: string | symbol
      ): Function[];

      Returns a copy of the array of listeners for the event named eventName, including any wrappers (such as those created by .once()).

      import { EventEmitter } from 'node:events';
      const emitter = new EventEmitter();
      emitter.once('log', () => console.log('log once'));
      
      // Returns a new Array with a function `onceWrapper` which has a property
      // `listener` which contains the original listener bound above
      const listeners = emitter.rawListeners('log');
      const logFnWrapper = listeners[0];
      
      // Logs "log once" to the console and does not unbind the `once` event
      logFnWrapper.listener();
      
      // Logs "log once" to the console and removes the listener
      logFnWrapper();
      
      emitter.on('log', () => console.log('log persistently'));
      // Will return a new Array with a single function bound by `.on()` above
      const newListeners = emitter.rawListeners('log');
      
      // Logs "log persistently" twice
      newListeners[0]();
      emitter.emit('log');
      
    • ref(): this;

      Opposite of unref(): calling ref() on a previously unrefed server will not let the program exit if it's the only server left (the default behavior). If the server is refed, calling ref() again will have no effect.

    • eventName?: string | symbol
      ): this;

      Removes all listeners, or those of the specified eventName.

      It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).

      Returns a reference to the EventEmitter, so that calls can be chained.

    • eventName: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Removes the specified listener from the listener array for the event named eventName.

      const callback = (stream) => {
        console.log('someone connected!');
      };
      server.on('connection', callback);
      // ...
      server.removeListener('connection', callback);
      

      removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance.

      Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from emit() in progress. Subsequent events behave as expected.

      import { EventEmitter } from 'node:events';
      class MyEmitter extends EventEmitter {}
      const myEmitter = new MyEmitter();
      
      const callbackA = () => {
        console.log('A');
        myEmitter.removeListener('event', callbackB);
      };
      
      const callbackB = () => {
        console.log('B');
      };
      
      myEmitter.on('event', callbackA);
      
      myEmitter.on('event', callbackB);
      
      // callbackA removes listener callbackB but it will still be called.
      // Internal listener array at time of emit [callbackA, callbackB]
      myEmitter.emit('event');
      // Prints:
      //   A
      //   B
      
      // callbackB is now removed.
      // Internal listener array [callbackA]
      myEmitter.emit('event');
      // Prints:
      //   A
      

      Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.

      When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed:

      import { EventEmitter } from 'node:events';
      const ee = new EventEmitter();
      
      function pong() {
        console.log('pong');
      }
      
      ee.on('ping', pong);
      ee.once('ping', pong);
      ee.removeListener('ping', pong);
      
      ee.emit('ping');
      ee.emit('ping');
      

      Returns a reference to the EventEmitter, so that calls can be chained.

    • n: number
      ): this;

      By default, EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps find memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.

      Returns a reference to the EventEmitter, so that calls can be chained.

    • options: SecureContextOptions
      ): void;

      The server.setSecureContext() method replaces the secure context of an existing server. Existing connections to the server are not interrupted.

      @param options

      An object containing any of the possible properties from the createSecureContext options arguments (e.g. key, cert, ca, etc).
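
      A minimal sketch of rotating certificates at runtime; the file paths are placeholders, and existing connections keep using the previous context:

      import { readFileSync } from 'node:fs';

      server.setSecureContext({
        key: readFileSync('./new-key.pem'),
        cert: readFileSync('./new-cert.pem'),
      });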

    • keys: Buffer
      ): void;

      Sets the session ticket keys.

      Changes to the ticket keys are effective only for future server connections. Existing or currently pending server connections will use the previous keys.

      See Session Resumption for more information.

      @param keys

      A 48-byte buffer containing the session ticket keys.
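
      For example, the ticket keys can be rotated periodically so that newly established sessions use fresh keys. A rough sketch; generating the 48 random bytes with node:crypto is an assumption about how the keys are produced, not a requirement of the API:

      import { randomBytes } from 'node:crypto';

      console.log(server.getTicketKeys().length); // 48

      // Rotate the keys every 12 hours; only future connections are affected.
      setInterval(() => {
        server.setTicketKeys(randomBytes(48));
      }, 12 * 60 * 60 * 1000);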

    • msec?: number,
      callback?: () => void
      ): this;
    • unref(): this;

      Calling unref() on a server will allow the program to exit if this is the only active server in the event system. If the server is already unrefed, calling unref() again will have no effect.

    • settings: Settings
      ): void;

      Throws ERR_HTTP2_INVALID_SETTING_VALUE for invalid settings values. Throws ERR_INVALID_ARG_TYPE for invalid settings argument.
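
      A small sketch of applying new settings, with the documented errors handled; the particular settings shown are only illustrative:

      try {
        server.updateSettings({
          enablePush: false,
          initialWindowSize: 1024 * 1024,
        });
      } catch (err) {
        // ERR_HTTP2_INVALID_SETTING_VALUE or ERR_INVALID_ARG_TYPE
        console.error('invalid HTTP/2 settings:', err.code);
      }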

  • interface Http2Server<Http1Request extends typeof IncomingMessage = typeof IncomingMessage, Http1Response extends typeof ServerResponse = typeof ServerResponse, Http2Request extends typeof Http2ServerRequest = typeof Http2ServerRequest, Http2Response extends typeof Http2ServerResponse = typeof Http2ServerResponse>

    This class is used to create a TCP or IPC server.

    • connections: number
    • readonly listening: boolean

      Indicates whether or not the server is listening for connections.

    • maxConnections: number

      Set this property to reject connections when the server's connection count gets high.

      It is not recommended to use this option once a socket has been sent to a child with child_process.fork().
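
      A minimal sketch; the limit of 1000 is arbitrary:

      // Refuse additional connections once 1000 are open.
      server.maxConnections = 1000;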

    • [Symbol.asyncDispose](): Promise<void>;

      Calls server.close() and returns a promise that fulfills when the server has closed.
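
      A minimal sketch; awaiting the disposer is equivalent to calling close() and waiting for the 'close' event:

      // Shut the server down and wait until it has fully closed.
      await server[Symbol.asyncDispose]();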

    • error: Error,
      event: string | symbol,
      ...args: AnyRest
      ): void;
    • event: 'checkContinue',
      listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void
      ): this;

      events.EventEmitter

      1. close
      2. connection
      3. error
      4. listening
      5. drop
      event: 'request',
      listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void
      ): this;

      events.EventEmitter

      1. close
      2. connection
      3. error
      4. listening
      5. drop
      event: 'session',
      listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>) => void
      ): this;

      events.EventEmitter

      1. close
      2. connection
      3. error
      4. listening
      5. drop
      event: 'sessionError',
      listener: (err: Error) => void
      ): this;

      events.EventEmitter

      1. close
      2. connection
      3. error
      4. listening
      5. drop
      event: 'stream',
      listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void
      ): this;

      events.EventEmitter

      1. close
      2. connection
      3. error
      4. listening
      5. drop
      event: 'timeout',
      listener: () => void
      ): this;

      events.EventEmitter

      1. close
      2. connection
      3. error
      4. listening
      5. drop
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      events.EventEmitter

      1. close
      2. connection
      3. error
      4. listening
      5. drop
    • address(): null | string | AddressInfo;

      Returns the bound address, the address family name, and port of the server as reported by the operating system if listening on an IP socket (useful to find which port was assigned when getting an OS-assigned address): { port: 12346, family: 'IPv4', address: '127.0.0.1' }.

      For a server listening on a pipe or Unix domain socket, the name is returned as a string.

      const server = net.createServer((socket) => {
        socket.end('goodbye\n');
      }).on('error', (err) => {
        // Handle errors here.
        throw err;
      });
      
      // Grab an arbitrary unused port.
      server.listen(() => {
        console.log('opened server on', server.address());
      });
      

      server.address() returns null before the 'listening' event has been emitted or after calling server.close().

    • callback?: (err?: Error) => void
      ): this;

      Stops the server from accepting new connections and keeps existing connections. This function is asynchronous; the server is finally closed when all connections are ended and the server emits a 'close' event. The optional callback will be called once the 'close' event occurs. Unlike that event, it will be called with an Error as its only argument if the server was not open when it was closed.

      @param callback

      Called when the server is closed.

    • event: 'checkContinue',
      request: InstanceType<Http2Request>,
      response: InstanceType<Http2Response>
      ): boolean;

      Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

      Returns true if the event had listeners, false otherwise.

      import { EventEmitter } from 'node:events';
      const myEmitter = new EventEmitter();
      
      // First listener
      myEmitter.on('event', function firstListener() {
        console.log('Helloooo! first listener');
      });
      // Second listener
      myEmitter.on('event', function secondListener(arg1, arg2) {
        console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
      });
      // Third listener
      myEmitter.on('event', function thirdListener(...args) {
        const parameters = args.join(', ');
        console.log(`event with parameters ${parameters} in third listener`);
      });
      
      console.log(myEmitter.listeners('event'));
      
      myEmitter.emit('event', 1, 2, 3, 4, 5);
      
      // Prints:
      // [
      //   [Function: firstListener],
      //   [Function: secondListener],
      //   [Function: thirdListener]
      // ]
      // Helloooo! first listener
      // event with parameters 1, 2 in second listener
      // event with parameters 1, 2, 3, 4, 5 in third listener
      
      event: 'request',
      request: InstanceType<Http2Request>,
      response: InstanceType<Http2Response>
      ): boolean;
      event: 'session',
      session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>
      ): boolean;
      event: 'sessionError',
      err: Error
      ): boolean;
      event: 'stream',
      flags: number
      ): boolean;
      event: 'timeout'
      ): boolean;
      event: string | symbol,
      ...args: any[]
      ): boolean;
    • eventNames(): (string | symbol)[];

      Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or Symbols.

      import { EventEmitter } from 'node:events';
      
      const myEE = new EventEmitter();
      myEE.on('foo', () => {});
      myEE.on('bar', () => {});
      
      const sym = Symbol('symbol');
      myEE.on(sym, () => {});
      
      console.log(myEE.eventNames());
      // Prints: [ 'foo', 'bar', Symbol(symbol) ]
      
    • cb: (error: null | Error, count: number) => void
      ): this;

      Asynchronously get the number of concurrent connections on the server. Works when sockets were sent to forks.

      Callback should take two arguments err and count.

    • getMaxListeners(): number;

      Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to EventEmitter.defaultMaxListeners.

    • port?: number,
      hostname?: string,
      backlog?: number,
      listeningListener?: () => void
      ): this;

      Start a server listening for connections. A net.Server can be a TCP or an IPC server depending on what it listens to.

      Possible signatures:

      • server.listen(handle[, backlog][, callback])
      • server.listen(options[, callback])
      • server.listen(path[, backlog][, callback]) for IPC servers
      • server.listen([port[, host[, backlog]]][, callback]) for TCP servers

      This function is asynchronous. When the server starts listening, the 'listening' event will be emitted. The last parameter callback will be added as a listener for the 'listening' event.

      All listen() methods can take a backlog parameter to specify the maximum length of the queue of pending connections. The actual length will be determined by the OS through sysctl settings such as tcp_max_syn_backlog and somaxconn on Linux. The default value of this parameter is 511 (not 512).

      All Socket are set to SO_REUSEADDR (see socket(7) for details).

      The server.listen() method can be called again if and only if there was an error during the first server.listen() call or server.close() has been called. Otherwise, an ERR_SERVER_ALREADY_LISTEN error will be thrown.

      One of the most common errors raised when listening is EADDRINUSE. This happens when another server is already listening on the requested port/path/handle. One way to handle this would be to retry after a certain amount of time:

      server.on('error', (e) => {
        if (e.code === 'EADDRINUSE') {
          console.error('Address in use, retrying...');
          setTimeout(() => {
            server.close();
            server.listen(PORT, HOST);
          }, 1000);
        }
      });
      
      port?: number,
      hostname?: string,
      listeningListener?: () => void
      ): this;

      Start a server listening for connections. A net.Server can be a TCP or an IPC server depending on what it listens to.

      Possible signatures:

      • server.listen(handle[, backlog][, callback])
      • server.listen(options[, callback])
      • server.listen(path[, backlog][, callback]) for IPC servers
      • server.listen([port[, host[, backlog]]][, callback]) for TCP servers

      This function is asynchronous. When the server starts listening, the 'listening' event will be emitted. The last parameter callback will be added as a listener for the 'listening' event.

      All listen() methods can take a backlog parameter to specify the maximum length of the queue of pending connections. The actual length will be determined by the OS through sysctl settings such as tcp_max_syn_backlog and somaxconn on Linux. The default value of this parameter is 511 (not 512).

      All Socket are set to SO_REUSEADDR (see socket(7) for details).

      The server.listen() method can be called again if and only if there was an error during the first server.listen() call or server.close() has been called. Otherwise, an ERR_SERVER_ALREADY_LISTEN error will be thrown.

      One of the most common errors raised when listening is EADDRINUSE. This happens when another server is already listening on the requested port/path/handle. One way to handle this would be to retry after a certain amount of time:

      server.on('error', (e) => {
        if (e.code === 'EADDRINUSE') {
          console.error('Address in use, retrying...');
          setTimeout(() => {
            server.close();
            server.listen(PORT, HOST);
          }, 1000);
        }
      });
      
      port?: number,
      backlog?: number,
      listeningListener?: () => void
      ): this;

      Start a server listening for connections. A net.Server can be a TCP or an IPC server depending on what it listens to.

      Possible signatures:

      • server.listen(handle[, backlog][, callback])
      • server.listen(options[, callback])
      • server.listen(path[, backlog][, callback]) for IPC servers
      • server.listen([port[, host[, backlog]]][, callback]) for TCP servers

      This function is asynchronous. When the server starts listening, the 'listening' event will be emitted. The last parameter callback will be added as a listener for the 'listening' event.

      All listen() methods can take a backlog parameter to specify the maximum length of the queue of pending connections. The actual length will be determined by the OS through sysctl settings such as tcp_max_syn_backlog and somaxconn on Linux. The default value of this parameter is 511 (not 512).

      All Socket are set to SO_REUSEADDR (see socket(7) for details).

      The server.listen() method can be called again if and only if there was an error during the first server.listen() call or server.close() has been called. Otherwise, an ERR_SERVER_ALREADY_LISTEN error will be thrown.

      One of the most common errors raised when listening is EADDRINUSE. This happens when another server is already listening on the requested port/path/handle. One way to handle this would be to retry after a certain amount of time:

      server.on('error', (e) => {
        if (e.code === 'EADDRINUSE') {
          console.error('Address in use, retrying...');
          setTimeout(() => {
            server.close();
            server.listen(PORT, HOST);
          }, 1000);
        }
      });
      
      port?: number,
      listeningListener?: () => void
      ): this;

      Start a server listening for connections. A net.Server can be a TCP or an IPC server depending on what it listens to.

      Possible signatures:

      • server.listen(handle[, backlog][, callback])
      • server.listen(options[, callback])
      • server.listen(path[, backlog][, callback]) for IPC servers
      • server.listen([port[, host[, backlog]]][, callback]) for TCP servers

      This function is asynchronous. When the server starts listening, the 'listening' event will be emitted. The last parameter callback will be added as a listener for the 'listening' event.

      All listen() methods can take a backlog parameter to specify the maximum length of the queue of pending connections. The actual length will be determined by the OS through sysctl settings such as tcp_max_syn_backlog and somaxconn on Linux. The default value of this parameter is 511 (not 512).

      All Socket are set to SO_REUSEADDR (see socket(7) for details).

      The server.listen() method can be called again if and only if there was an error during the first server.listen() call or server.close() has been called. Otherwise, an ERR_SERVER_ALREADY_LISTEN error will be thrown.

      One of the most common errors raised when listening is EADDRINUSE. This happens when another server is already listening on the requested port/path/handle. One way to handle this would be to retry after a certain amount of time:

      server.on('error', (e) => {
        if (e.code === 'EADDRINUSE') {
          console.error('Address in use, retrying...');
          setTimeout(() => {
            server.close();
            server.listen(PORT, HOST);
          }, 1000);
        }
      });
      
      path: string,
      backlog?: number,
      listeningListener?: () => void
      ): this;

      Start a server listening for connections. A net.Server can be a TCP or an IPC server depending on what it listens to.

      Possible signatures:

      • server.listen(handle[, backlog][, callback])
      • server.listen(options[, callback])
      • server.listen(path[, backlog][, callback]) for IPC servers
      • server.listen([port[, host[, backlog]]][, callback]) for TCP servers

      This function is asynchronous. When the server starts listening, the 'listening' event will be emitted. The last parameter callback will be added as a listener for the 'listening' event.

      All listen() methods can take a backlog parameter to specify the maximum length of the queue of pending connections. The actual length will be determined by the OS through sysctl settings such as tcp_max_syn_backlog and somaxconn on Linux. The default value of this parameter is 511 (not 512).

      All Socket are set to SO_REUSEADDR (see socket(7) for details).

      The server.listen() method can be called again if and only if there was an error during the first server.listen() call or server.close() has been called. Otherwise, an ERR_SERVER_ALREADY_LISTEN error will be thrown.

      One of the most common errors raised when listening is EADDRINUSE. This happens when another server is already listening on the requested port/path/handle. One way to handle this would be to retry after a certain amount of time:

      server.on('error', (e) => {
        if (e.code === 'EADDRINUSE') {
          console.error('Address in use, retrying...');
          setTimeout(() => {
            server.close();
            server.listen(PORT, HOST);
          }, 1000);
        }
      });
      
      path: string,
      listeningListener?: () => void
      ): this;

      Start a server listening for connections. A net.Server can be a TCP or an IPC server depending on what it listens to.

      Possible signatures:

      • server.listen(handle[, backlog][, callback])
      • server.listen(options[, callback])
      • server.listen(path[, backlog][, callback]) for IPC servers
      • server.listen([port[, host[, backlog]]][, callback]) for TCP servers

      This function is asynchronous. When the server starts listening, the 'listening' event will be emitted. The last parameter callback will be added as a listener for the 'listening' event.

      All listen() methods can take a backlog parameter to specify the maximum length of the queue of pending connections. The actual length will be determined by the OS through sysctl settings such as tcp_max_syn_backlog and somaxconn on Linux. The default value of this parameter is 511 (not 512).

      All Socket are set to SO_REUSEADDR (see socket(7) for details).

      The server.listen() method can be called again if and only if there was an error during the first server.listen() call or server.close() has been called. Otherwise, an ERR_SERVER_ALREADY_LISTEN error will be thrown.

      One of the most common errors raised when listening is EADDRINUSE. This happens when another server is already listening on the requested port/path/handle. One way to handle this would be to retry after a certain amount of time:

      server.on('error', (e) => {
        if (e.code === 'EADDRINUSE') {
          console.error('Address in use, retrying...');
          setTimeout(() => {
            server.close();
            server.listen(PORT, HOST);
          }, 1000);
        }
      });
      
      options: ListenOptions,
      listeningListener?: () => void
      ): this;

      Start a server listening for connections. A net.Server can be a TCP or an IPC server depending on what it listens to.

      Possible signatures:

      • server.listen(handle[, backlog][, callback])
      • server.listen(options[, callback])
      • server.listen(path[, backlog][, callback]) for IPC servers
      • server.listen([port[, host[, backlog]]][, callback]) for TCP servers

      This function is asynchronous. When the server starts listening, the 'listening' event will be emitted. The last parameter callback will be added as a listener for the 'listening' event.

      All listen() methods can take a backlog parameter to specify the maximum length of the queue of pending connections. The actual length will be determined by the OS through sysctl settings such as tcp_max_syn_backlog and somaxconn on Linux. The default value of this parameter is 511 (not 512).

      All Socket are set to SO_REUSEADDR (see socket(7) for details).

      The server.listen() method can be called again if and only if there was an error during the first server.listen() call or server.close() has been called. Otherwise, an ERR_SERVER_ALREADY_LISTEN error will be thrown.

      One of the most common errors raised when listening is EADDRINUSE. This happens when another server is already listening on the requested port/path/handle. One way to handle this would be to retry after a certain amount of time:

      server.on('error', (e) => {
        if (e.code === 'EADDRINUSE') {
          console.error('Address in use, retrying...');
          setTimeout(() => {
            server.close();
            server.listen(PORT, HOST);
          }, 1000);
        }
      });
      
      handle: any,
      backlog?: number,
      listeningListener?: () => void
      ): this;

      Start a server listening for connections. A net.Server can be a TCP or an IPC server depending on what it listens to.

      Possible signatures:

      • server.listen(handle[, backlog][, callback])
      • server.listen(options[, callback])
      • server.listen(path[, backlog][, callback]) for IPC servers
      • server.listen([port[, host[, backlog]]][, callback]) for TCP servers

      This function is asynchronous. When the server starts listening, the 'listening' event will be emitted. The last parameter callback will be added as a listener for the 'listening' event.

      All listen() methods can take a backlog parameter to specify the maximum length of the queue of pending connections. The actual length will be determined by the OS through sysctl settings such as tcp_max_syn_backlog and somaxconn on Linux. The default value of this parameter is 511 (not 512).

      All Socket are set to SO_REUSEADDR (see socket(7) for details).

      The server.listen() method can be called again if and only if there was an error during the first server.listen() call or server.close() has been called. Otherwise, an ERR_SERVER_ALREADY_LISTEN error will be thrown.

      One of the most common errors raised when listening is EADDRINUSE. This happens when another server is already listening on the requested port/path/handle. One way to handle this would be to retry after a certain amount of time:

      server.on('error', (e) => {
        if (e.code === 'EADDRINUSE') {
          console.error('Address in use, retrying...');
          setTimeout(() => {
            server.close();
            server.listen(PORT, HOST);
          }, 1000);
        }
      });
      
      handle: any,
      listeningListener?: () => void
      ): this;

      Start a server listening for connections. A net.Server can be a TCP or an IPC server depending on what it listens to.

      Possible signatures:

      • server.listen(handle[, backlog][, callback])
      • server.listen(options[, callback])
      • server.listen(path[, backlog][, callback]) for IPC servers
      • server.listen([port[, host[, backlog]]][, callback]) for TCP servers

      This function is asynchronous. When the server starts listening, the 'listening' event will be emitted. The last parameter callback will be added as a listener for the 'listening' event.

      All listen() methods can take a backlog parameter to specify the maximum length of the queue of pending connections. The actual length will be determined by the OS through sysctl settings such as tcp_max_syn_backlog and somaxconn on Linux. The default value of this parameter is 511 (not 512).

      All Socket are set to SO_REUSEADDR (see socket(7) for details).

      The server.listen() method can be called again if and only if there was an error during the first server.listen() call or server.close() has been called. Otherwise, an ERR_SERVER_ALREADY_LISTEN error will be thrown.

      One of the most common errors raised when listening is EADDRINUSE. This happens when another server is already listening on the requested port/path/handle. One way to handle this would be to retry after a certain amount of time:

      server.on('error', (e) => {
        if (e.code === 'EADDRINUSE') {
          console.error('Address in use, retrying...');
          setTimeout(() => {
            server.close();
            server.listen(PORT, HOST);
          }, 1000);
        }
      });
      
    • eventName: string | symbol,
      listener?: Function
      ): number;

      Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.

      @param eventName

      The name of the event being listened for

      @param listener

      The event handler function

    • eventName: string | symbol
      ): Function[];

      Returns a copy of the array of listeners for the event named eventName.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      console.log(util.inspect(server.listeners('connection')));
      // Prints: [ [Function] ]
      
    • off<K>(
      eventName: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Alias for emitter.removeListener().

    • event: 'checkContinue',
      listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void
      ): this;

      Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.on('foo', () => console.log('a'));
      myEE.prependListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: 'request',
      listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void
      ): this;
      event: 'session',
      listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>) => void
      ): this;
      event: 'sessionError',
      listener: (err: Error) => void
      ): this;
      event: 'stream',
      listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void
      ): this;
      event: 'timeout',
      listener: () => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • event: 'checkContinue',
      listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void
      ): this;

      Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

      server.once('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.once('foo', () => console.log('a'));
      myEE.prependOnceListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: 'request',
      listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void
      ): this;
      event: 'session',
      listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>) => void
      ): this;
      event: 'sessionError',
      listener: (err: Error) => void
      ): this;
      event: 'stream',
      listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void
      ): this;
      event: 'timeout',
      listener: () => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • event: 'checkContinue',
      listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void
      ): this;

      Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.prependListener('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: 'request',
      listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void
      ): this;
      event: 'session',
      listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>) => void
      ): this;
      event: 'sessionError',
      listener: (err: Error) => void
      ): this;
      event: 'stream',
      listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void
      ): this;
      event: 'timeout',
      listener: () => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • event: 'checkContinue',
      listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void
      ): this;

      Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

      server.prependOnceListener('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: 'request',
      listener: (request: InstanceType<Http2Request>, response: InstanceType<Http2Response>) => void
      ): this;
      event: 'session',
      listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>) => void
      ): this;
      event: 'sessionError',
      listener: (err: Error) => void
      ): this;
      event: 'stream',
      listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void
      ): this;
      event: 'timeout',
      listener: () => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • eventName: string | symbol
      ): Function[];

      Returns a copy of the array of listeners for the event named eventName, including any wrappers (such as those created by .once()).

      import { EventEmitter } from 'node:events';
      const emitter = new EventEmitter();
      emitter.once('log', () => console.log('log once'));
      
      // Returns a new Array with a function `onceWrapper` which has a property
      // `listener` which contains the original listener bound above
      const listeners = emitter.rawListeners('log');
      const logFnWrapper = listeners[0];
      
      // Logs "log once" to the console and does not unbind the `once` event
      logFnWrapper.listener();
      
      // Logs "log once" to the console and removes the listener
      logFnWrapper();
      
      emitter.on('log', () => console.log('log persistently'));
      // Will return a new Array with a single function bound by `.on()` above
      const newListeners = emitter.rawListeners('log');
      
      // Logs "log persistently" twice
      newListeners[0]();
      emitter.emit('log');
      
    • ref(): this;

      Opposite of unref(), calling ref() on a previously unrefed server will not let the program exit if it's the only server left (the default behavior). If the server is refed, calling ref() again will have no effect.

    • eventName?: string | symbol
      ): this;

      Removes all listeners, or those of the specified eventName.

      It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).

      Returns a reference to the EventEmitter, so that calls can be chained.

    • eventName: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Removes the specified listener from the listener array for the event named eventName.

      const callback = (stream) => {
        console.log('someone connected!');
      };
      server.on('connection', callback);
      // ...
      server.removeListener('connection', callback);
      

      removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance.

      Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from emit() in progress. Subsequent events behave as expected.

      import { EventEmitter } from 'node:events';
      class MyEmitter extends EventEmitter {}
      const myEmitter = new MyEmitter();
      
      const callbackA = () => {
        console.log('A');
        myEmitter.removeListener('event', callbackB);
      };
      
      const callbackB = () => {
        console.log('B');
      };
      
      myEmitter.on('event', callbackA);
      
      myEmitter.on('event', callbackB);
      
      // callbackA removes listener callbackB but it will still be called.
      // Internal listener array at time of emit [callbackA, callbackB]
      myEmitter.emit('event');
      // Prints:
      //   A
      //   B
      
      // callbackB is now removed.
      // Internal listener array [callbackA]
      myEmitter.emit('event');
      // Prints:
      //   A
      

      Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.

      When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed:

      import { EventEmitter } from 'node:events';
      const ee = new EventEmitter();
      
      function pong() {
        console.log('pong');
      }
      
      ee.on('ping', pong);
      ee.once('ping', pong);
      ee.removeListener('ping', pong);
      
      ee.emit('ping');
      ee.emit('ping');
      

      Returns a reference to the EventEmitter, so that calls can be chained.

    • n: number
      ): this;

      By default EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps find memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.

      Returns a reference to the EventEmitter, so that calls can be chained.

    • msec?: number,
      callback?: () => void
      ): this;
    • unref(): this;

      Calling unref() on a server will allow the program to exit if this is the only active server in the event system. If the server is already unrefed, calling unref() again will have no effect.

    • settings: Settings
      ): void;

      Throws ERR_HTTP2_INVALID_SETTING_VALUE for invalid settings values. Throws ERR_INVALID_ARG_TYPE for an invalid settings argument.
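
      As a minimal, illustrative sketch (the setting values shown are arbitrary), the server's local settings can be updated after creation:

      import { createServer } from 'node:http2';

      const server = createServer();

      // Any valid HTTP/2 Settings keys may be passed; these values are examples only.
      server.updateSettings({ headerTableSize: 4096, maxConcurrentStreams: 100 });

      // A non-object argument throws ERR_INVALID_ARG_TYPE;
      // an out-of-range value throws ERR_HTTP2_INVALID_SETTING_VALUE.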

  • interface Http2Session

    The EventEmitter class is defined and exposed by the node:events module:

    import { EventEmitter } from 'node:events';
    

    All EventEmitters emit the event 'newListener' when new listeners are added and 'removeListener' when existing listeners are removed.

    The following properties and methods are available on Http2Session instances:

    • readonly alpnProtocol?: string

      Value will be undefined if the Http2Session is not yet connected to a socket, 'h2c' if the Http2Session is not connected to a TLSSocket, or will return the value of the connected TLSSocket's own alpnProtocol property.
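
      For instance (a sketch; the URL is a placeholder, and the session is a client session created with http2.connect()):

      import { connect } from 'node:http2';

      const session = connect('https://localhost:8443');
      session.once('connect', () => {
        console.log(session.alpnProtocol); // e.g. 'h2' over TLS, 'h2c' for plaintext
      });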

    • readonly closed: boolean

      Will be true if this Http2Session instance has been closed, otherwise false.

    • readonly connecting: boolean

      Will be true if this Http2Session instance is still connecting; it is set to false before the 'connect' event is emitted and/or the http2.connect callback is called.

    • readonly destroyed: boolean

      Will be true if this Http2Session instance has been destroyed and must no longer be used, otherwise false.

    • readonly encrypted?: boolean

      Value is undefined if the Http2Session session socket has not yet been connected, true if the Http2Session is connected with a TLSSocket, and false if the Http2Session is connected to any other kind of socket or stream.

    • readonly localSettings: Settings

      A prototype-less object describing the current local settings of this Http2Session. The local settings are local to this Http2Session instance.

    • readonly originSet?: string[]

      If the Http2Session is connected to a TLSSocket, the originSet property will return an Array of origins for which the Http2Session may be considered authoritative.

      The originSet property is only available when using a secure TLS connection.
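
      A small sketch (the URL is a placeholder; the exact contents of the array depend on the connected server):

      import { connect } from 'node:http2';

      const session = connect('https://example.org');
      session.once('connect', () => {
        console.log(session.originSet); // e.g. [ 'https://example.org' ]
      });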

    • readonly pendingSettingsAck: boolean

      Indicates whether the Http2Session is currently waiting for acknowledgment of a sent SETTINGS frame. Will be true after calling the http2session.settings() method. Will be false once all sent SETTINGS frames have been acknowledged.
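
      A sketch of how the flag changes around a settings() call (assuming no other SETTINGS frames are outstanding; session is an existing Http2Session):

      session.settings({ maxConcurrentStreams: 100 });
      console.log(session.pendingSettingsAck); // true until the peer acknowledges

      session.once('localSettings', () => {
        console.log(session.pendingSettingsAck); // false once the ACK has arrived
      });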

    • readonly remoteSettings: Settings

      A prototype-less object describing the current remote settings of this Http2Session. The remote settings are set by the connected HTTP/2 peer.

    • readonly socket: Socket | TLSSocket

      Returns a Proxy object that acts as a net.Socket (or tls.TLSSocket) but limits available methods to ones safe to use with HTTP/2.

      destroy, emit, end, pause, read, resume, and write will throw an error with code ERR_HTTP2_NO_SOCKET_MANIPULATION. See Http2Session and Sockets for more information.

      setTimeout method will be called on this Http2Session.

      All other interactions will be routed directly to the socket.
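
      For example (a sketch; session is an existing Http2Session), reads are forwarded to the underlying socket while direct manipulation is rejected:

      console.log(session.socket.remoteAddress); // routed to the socket

      try {
        session.socket.destroy(); // not permitted through the proxy
      } catch (err) {
        console.log(err.code); // 'ERR_HTTP2_NO_SOCKET_MANIPULATION'
      }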

    • readonly state: SessionState

      Provides miscellaneous information about the current state of the Http2Session.

      An object describing the current status of this Http2Session.

    • readonly type: number

      The http2session.type will be equal to http2.constants.NGHTTP2_SESSION_SERVER if this Http2Session instance is a server, and http2.constants.NGHTTP2_SESSION_CLIENT if the instance is a client.
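
      A small sketch distinguishing the two kinds of session:

      import { constants } from 'node:http2';

      if (session.type === constants.NGHTTP2_SESSION_SERVER) {
        console.log('server-side session');
      } else if (session.type === constants.NGHTTP2_SESSION_CLIENT) {
        console.log('client-side session');
      }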

    • error: Error,
      event: string | symbol,
      ...args: AnyRest
      ): void;
    • event: 'close',
      listener: () => void
      ): this;

      Alias for emitter.on(eventName, listener).

      event: 'error',
      listener: (err: Error) => void
      ): this;

      Alias for emitter.on(eventName, listener).

      event: 'frameError',
      listener: (frameType: number, errorCode: number, streamID: number) => void
      ): this;

      Alias for emitter.on(eventName, listener).

      event: 'goaway',
      listener: (errorCode: number, lastStreamID: number, opaqueData?: Buffer<ArrayBufferLike>) => void
      ): this;

      Alias for emitter.on(eventName, listener).

      event: 'localSettings',
      listener: (settings: Settings) => void
      ): this;

      Alias for emitter.on(eventName, listener).

      event: 'ping',
      listener: () => void
      ): this;

      Alias for emitter.on(eventName, listener).

      event: 'remoteSettings',
      listener: (settings: Settings) => void
      ): this;

      Alias for emitter.on(eventName, listener).

      event: 'timeout',
      listener: () => void
      ): this;

      Alias for emitter.on(eventName, listener).

      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Alias for emitter.on(eventName, listener).

    • callback?: () => void
      ): void;

      Gracefully closes the Http2Session, allowing any existing streams to complete on their own and preventing new Http2Stream instances from being created. Once closed, http2session.destroy() might be called if there are no open Http2Stream instances.

      If specified, the callback function is registered as a handler for the 'close' event.
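
      A minimal sketch of a graceful shutdown:

      session.close(() => {
        console.log('Http2Session closed');
      });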

    • error?: Error,
      code?: number
      ): void;

      Immediately terminates the Http2Session and the associated net.Socket or tls.TLSSocket.

      Once destroyed, the Http2Session will emit the 'close' event. If error is not undefined, an 'error' event will be emitted immediately before the 'close' event.

      If there are any remaining open Http2Streams associated with the Http2Session, those will also be destroyed.

      @param error

      An Error object if the Http2Session is being destroyed due to an error.

      @param code

      The HTTP/2 error code to send in the final GOAWAY frame. If unspecified, and error is not undefined, the default is INTERNAL_ERROR, otherwise defaults to NO_ERROR.
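
      For example (a sketch; the error message and error code shown are illustrative):

      import { constants } from 'node:http2';

      session.destroy(new Error('too many requests'), constants.NGHTTP2_ENHANCE_YOUR_CALM);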

    • event: 'close'
      ): boolean;

      Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

      Returns true if the event had listeners, false otherwise.

      import { EventEmitter } from 'node:events';
      const myEmitter = new EventEmitter();
      
      // First listener
      myEmitter.on('event', function firstListener() {
        console.log('Helloooo! first listener');
      });
      // Second listener
      myEmitter.on('event', function secondListener(arg1, arg2) {
        console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
      });
      // Third listener
      myEmitter.on('event', function thirdListener(...args) {
        const parameters = args.join(', ');
        console.log(`event with parameters ${parameters} in third listener`);
      });
      
      console.log(myEmitter.listeners('event'));
      
      myEmitter.emit('event', 1, 2, 3, 4, 5);
      
      // Prints:
      // [
      //   [Function: firstListener],
      //   [Function: secondListener],
      //   [Function: thirdListener]
      // ]
      // Helloooo! first listener
      // event with parameters 1, 2 in second listener
      // event with parameters 1, 2, 3, 4, 5 in third listener
      
      event: 'error',
      err: Error
      ): boolean;

      Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

      Returns true if the event had listeners, false otherwise.

      import { EventEmitter } from 'node:events';
      const myEmitter = new EventEmitter();
      
      // First listener
      myEmitter.on('event', function firstListener() {
        console.log('Helloooo! first listener');
      });
      // Second listener
      myEmitter.on('event', function secondListener(arg1, arg2) {
        console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
      });
      // Third listener
      myEmitter.on('event', function thirdListener(...args) {
        const parameters = args.join(', ');
        console.log(`event with parameters ${parameters} in third listener`);
      });
      
      console.log(myEmitter.listeners('event'));
      
      myEmitter.emit('event', 1, 2, 3, 4, 5);
      
      // Prints:
      // [
      //   [Function: firstListener],
      //   [Function: secondListener],
      //   [Function: thirdListener]
      // ]
      // Helloooo! first listener
      // event with parameters 1, 2 in second listener
      // event with parameters 1, 2, 3, 4, 5 in third listener
      
      event: 'frameError',
      frameType: number,
      errorCode: number,
      streamID: number
      ): boolean;

      Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

      Returns true if the event had listeners, false otherwise.

      import { EventEmitter } from 'node:events';
      const myEmitter = new EventEmitter();
      
      // First listener
      myEmitter.on('event', function firstListener() {
        console.log('Helloooo! first listener');
      });
      // Second listener
      myEmitter.on('event', function secondListener(arg1, arg2) {
        console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
      });
      // Third listener
      myEmitter.on('event', function thirdListener(...args) {
        const parameters = args.join(', ');
        console.log(`event with parameters ${parameters} in third listener`);
      });
      
      console.log(myEmitter.listeners('event'));
      
      myEmitter.emit('event', 1, 2, 3, 4, 5);
      
      // Prints:
      // [
      //   [Function: firstListener],
      //   [Function: secondListener],
      //   [Function: thirdListener]
      // ]
      // Helloooo! first listener
      // event with parameters 1, 2 in second listener
      // event with parameters 1, 2, 3, 4, 5 in third listener
      
      event: 'goaway',
      errorCode: number,
      lastStreamID: number,
      opaqueData?: Buffer<ArrayBufferLike>
      ): boolean;

      Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

      Returns true if the event had listeners, false otherwise.

      import { EventEmitter } from 'node:events';
      const myEmitter = new EventEmitter();
      
      // First listener
      myEmitter.on('event', function firstListener() {
        console.log('Helloooo! first listener');
      });
      // Second listener
      myEmitter.on('event', function secondListener(arg1, arg2) {
        console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
      });
      // Third listener
      myEmitter.on('event', function thirdListener(...args) {
        const parameters = args.join(', ');
        console.log(`event with parameters ${parameters} in third listener`);
      });
      
      console.log(myEmitter.listeners('event'));
      
      myEmitter.emit('event', 1, 2, 3, 4, 5);
      
      // Prints:
      // [
      //   [Function: firstListener],
      //   [Function: secondListener],
      //   [Function: thirdListener]
      // ]
      // Helloooo! first listener
      // event with parameters 1, 2 in second listener
      // event with parameters 1, 2, 3, 4, 5 in third listener
      
      event: 'localSettings',
      settings: Settings
      ): boolean;

      Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

      Returns true if the event had listeners, false otherwise.

      import { EventEmitter } from 'node:events';
      const myEmitter = new EventEmitter();
      
      // First listener
      myEmitter.on('event', function firstListener() {
        console.log('Helloooo! first listener');
      });
      // Second listener
      myEmitter.on('event', function secondListener(arg1, arg2) {
        console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
      });
      // Third listener
      myEmitter.on('event', function thirdListener(...args) {
        const parameters = args.join(', ');
        console.log(`event with parameters ${parameters} in third listener`);
      });
      
      console.log(myEmitter.listeners('event'));
      
      myEmitter.emit('event', 1, 2, 3, 4, 5);
      
      // Prints:
      // [
      //   [Function: firstListener],
      //   [Function: secondListener],
      //   [Function: thirdListener]
      // ]
      // Helloooo! first listener
      // event with parameters 1, 2 in second listener
      // event with parameters 1, 2, 3, 4, 5 in third listener
      
      event: 'ping'
      ): boolean;

      Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

      Returns true if the event had listeners, false otherwise.

      import { EventEmitter } from 'node:events';
      const myEmitter = new EventEmitter();
      
      // First listener
      myEmitter.on('event', function firstListener() {
        console.log('Helloooo! first listener');
      });
      // Second listener
      myEmitter.on('event', function secondListener(arg1, arg2) {
        console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
      });
      // Third listener
      myEmitter.on('event', function thirdListener(...args) {
        const parameters = args.join(', ');
        console.log(`event with parameters ${parameters} in third listener`);
      });
      
      console.log(myEmitter.listeners('event'));
      
      myEmitter.emit('event', 1, 2, 3, 4, 5);
      
      // Prints:
      // [
      //   [Function: firstListener],
      //   [Function: secondListener],
      //   [Function: thirdListener]
      // ]
      // Helloooo! first listener
      // event with parameters 1, 2 in second listener
      // event with parameters 1, 2, 3, 4, 5 in third listener
      
      event: 'remoteSettings',
      settings: Settings
      ): boolean;

      Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

      Returns true if the event had listeners, false otherwise.

      import { EventEmitter } from 'node:events';
      const myEmitter = new EventEmitter();
      
      // First listener
      myEmitter.on('event', function firstListener() {
        console.log('Helloooo! first listener');
      });
      // Second listener
      myEmitter.on('event', function secondListener(arg1, arg2) {
        console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
      });
      // Third listener
      myEmitter.on('event', function thirdListener(...args) {
        const parameters = args.join(', ');
        console.log(`event with parameters ${parameters} in third listener`);
      });
      
      console.log(myEmitter.listeners('event'));
      
      myEmitter.emit('event', 1, 2, 3, 4, 5);
      
      // Prints:
      // [
      //   [Function: firstListener],
      //   [Function: secondListener],
      //   [Function: thirdListener]
      // ]
      // Helloooo! first listener
      // event with parameters 1, 2 in second listener
      // event with parameters 1, 2, 3, 4, 5 in third listener
      
      event: 'timeout'
      ): boolean;

      Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

      Returns true if the event had listeners, false otherwise.

      import { EventEmitter } from 'node:events';
      const myEmitter = new EventEmitter();
      
      // First listener
      myEmitter.on('event', function firstListener() {
        console.log('Helloooo! first listener');
      });
      // Second listener
      myEmitter.on('event', function secondListener(arg1, arg2) {
        console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
      });
      // Third listener
      myEmitter.on('event', function thirdListener(...args) {
        const parameters = args.join(', ');
        console.log(`event with parameters ${parameters} in third listener`);
      });
      
      console.log(myEmitter.listeners('event'));
      
      myEmitter.emit('event', 1, 2, 3, 4, 5);
      
      // Prints:
      // [
      //   [Function: firstListener],
      //   [Function: secondListener],
      //   [Function: thirdListener]
      // ]
      // Helloooo! first listener
      // event with parameters 1, 2 in second listener
      // event with parameters 1, 2, 3, 4, 5 in third listener
      
      event: string | symbol,
      ...args: any[]
      ): boolean;

      Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

      Returns true if the event had listeners, false otherwise.

      import { EventEmitter } from 'node:events';
      const myEmitter = new EventEmitter();
      
      // First listener
      myEmitter.on('event', function firstListener() {
        console.log('Helloooo! first listener');
      });
      // Second listener
      myEmitter.on('event', function secondListener(arg1, arg2) {
        console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
      });
      // Third listener
      myEmitter.on('event', function thirdListener(...args) {
        const parameters = args.join(', ');
        console.log(`event with parameters ${parameters} in third listener`);
      });
      
      console.log(myEmitter.listeners('event'));
      
      myEmitter.emit('event', 1, 2, 3, 4, 5);
      
      // Prints:
      // [
      //   [Function: firstListener],
      //   [Function: secondListener],
      //   [Function: thirdListener]
      // ]
      // Helloooo! first listener
      // event with parameters 1, 2 in second listener
      // event with parameters 1, 2, 3, 4, 5 in third listener
      
    • eventNames(): (string | symbol)[];

      Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or Symbols.

      import { EventEmitter } from 'node:events';
      
      const myEE = new EventEmitter();
      myEE.on('foo', () => {});
      myEE.on('bar', () => {});
      
      const sym = Symbol('symbol');
      myEE.on(sym, () => {});
      
      console.log(myEE.eventNames());
      // Prints: [ 'foo', 'bar', Symbol(symbol) ]
      
    • getMaxListeners(): number;

      Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to EventEmitter.defaultMaxListeners.

    • code?: number,
      lastStreamID?: number,
      opaqueData?: ArrayBufferView<ArrayBufferLike>
      ): void;

      Transmits a GOAWAY frame to the connected peer without shutting down the Http2Session.

      @param code

      An HTTP/2 error code

      @param lastStreamID

      The numeric ID of the last processed Http2Stream

      @param opaqueData

      A TypedArray or DataView instance containing additional data to be carried within the GOAWAY frame.
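
      A sketch (the error code, last stream ID, and opaque data shown are illustrative):

      import { constants } from 'node:http2';

      session.goaway(constants.NGHTTP2_NO_ERROR, 0, Buffer.from('maintenance'));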

    • eventName: string | symbol,
      listener?: Function
      ): number;

      Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.

      @param eventName

      The name of the event being listened for

      @param listener

      The event handler function

    • eventName: string | symbol
      ): Function[];

      Returns a copy of the array of listeners for the event named eventName.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      console.log(util.inspect(server.listeners('connection')));
      // Prints: [ [Function] ]
      
    • off<K>(
      eventName: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Alias for emitter.removeListener().

    • event: 'close',
      listener: () => void
      ): this;

      Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.on('foo', () => console.log('a'));
      myEE.prependListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: 'error',
      listener: (err: Error) => void
      ): this;

      Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.on('foo', () => console.log('a'));
      myEE.prependListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: 'frameError',
      listener: (frameType: number, errorCode: number, streamID: number) => void
      ): this;

      Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.on('foo', () => console.log('a'));
      myEE.prependListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: 'goaway',
      listener: (errorCode: number, lastStreamID: number, opaqueData?: Buffer<ArrayBufferLike>) => void
      ): this;

      Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.on('foo', () => console.log('a'));
      myEE.prependListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: 'localSettings',
      listener: (settings: Settings) => void
      ): this;

      Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.on('foo', () => console.log('a'));
      myEE.prependListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: 'ping',
      listener: () => void
      ): this;

      Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.on('foo', () => console.log('a'));
      myEE.prependListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: 'remoteSettings',
      listener: (settings: Settings) => void
      ): this;

      Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.on('foo', () => console.log('a'));
      myEE.prependListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: 'timeout',
      listener: () => void
      ): this;

      Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.on('foo', () => console.log('a'));
      myEE.prependListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.on('foo', () => console.log('a'));
      myEE.prependListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

    • event: 'close',
      listener: () => void
      ): this;

      Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

      server.once('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.once('foo', () => console.log('a'));
      myEE.prependOnceListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: 'error',
      listener: (err: Error) => void
      ): this;

      Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

      server.once('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.once('foo', () => console.log('a'));
      myEE.prependOnceListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: 'frameError',
      listener: (frameType: number, errorCode: number, streamID: number) => void
      ): this;

      Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

      server.once('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.once('foo', () => console.log('a'));
      myEE.prependOnceListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: 'goaway',
      listener: (errorCode: number, lastStreamID: number, opaqueData?: Buffer<ArrayBufferLike>) => void
      ): this;

      Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

      server.once('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.once('foo', () => console.log('a'));
      myEE.prependOnceListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: 'localSettings',
      listener: (settings: Settings) => void
      ): this;

      Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

      server.once('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.once('foo', () => console.log('a'));
      myEE.prependOnceListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: 'ping',
      listener: () => void
      ): this;

      Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

      server.once('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.once('foo', () => console.log('a'));
      myEE.prependOnceListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: 'remoteSettings',
      listener: (settings: Settings) => void
      ): this;

      Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

      server.once('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.once('foo', () => console.log('a'));
      myEE.prependOnceListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: 'timeout',
      listener: () => void
      ): this;

      Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

      server.once('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.once('foo', () => console.log('a'));
      myEE.prependOnceListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

      server.once('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.once('foo', () => console.log('a'));
      myEE.prependOnceListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

    • callback: (err: null | Error, duration: number, payload: Buffer) => void
      ): boolean;

      Sends a PING frame to the connected HTTP/2 peer. A callback function must be provided. The method will return true if the PING was sent, false otherwise.

      The maximum number of outstanding (unacknowledged) pings is determined by the maxOutstandingPings configuration option. The default maximum is 10.

      If provided, the payload must be a Buffer, TypedArray, or DataView containing 8 bytes of data that will be transmitted with the PING and returned with the ping acknowledgment.

      The callback will be invoked with three arguments: an error argument that will be null if the PING was successfully acknowledged, a duration argument that reports the number of milliseconds elapsed since the ping was sent and the acknowledgment was received, and a Buffer containing the 8-byte PING payload.

      session.ping(Buffer.from('abcdefgh'), (err, duration, payload) => {
        if (!err) {
          console.log(`Ping acknowledged in ${duration} milliseconds`);
          console.log(`With payload '${payload.toString()}'`);
        }
      });
      

      If the payload argument is not specified, the default payload will be the 64-bit timestamp (little endian) marking the start of the PING duration.

      payload: ArrayBufferView,
      callback: (err: null | Error, duration: number, payload: Buffer) => void
      ): boolean;
    • event: 'close',
      listener: () => void
      ): this;

      Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.prependListener('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: 'error',
      listener: (err: Error) => void
      ): this;

      Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.prependListener('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: 'frameError',
      listener: (frameType: number, errorCode: number, streamID: number) => void
      ): this;

      Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.prependListener('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: 'goaway',
      listener: (errorCode: number, lastStreamID: number, opaqueData?: Buffer<ArrayBufferLike>) => void
      ): this;

      Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.prependListener('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: 'localSettings',
      listener: (settings: Settings) => void
      ): this;

      Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.prependListener('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: 'ping',
      listener: () => void
      ): this;

      Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.prependListener('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: 'remoteSettings',
      listener: (settings: Settings) => void
      ): this;

      Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.prependListener('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: 'timeout',
      listener: () => void
      ): this;

      Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.prependListener('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.prependListener('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

    • event: 'close',
      listener: () => void
      ): this;

      Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

      server.prependOnceListener('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: 'error',
      listener: (err: Error) => void
      ): this;

      Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

      server.prependOnceListener('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: 'frameError',
      listener: (frameType: number, errorCode: number, streamID: number) => void
      ): this;

      Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

      server.prependOnceListener('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: 'goaway',
      listener: (errorCode: number, lastStreamID: number, opaqueData?: Buffer<ArrayBufferLike>) => void
      ): this;

      Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

      server.prependOnceListener('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: 'localSettings',
      listener: (settings: Settings) => void
      ): this;

      Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

      server.prependOnceListener('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: 'ping',
      listener: () => void
      ): this;

      Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

      server.prependOnceListener('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: 'remoteSettings',
      listener: (settings: Settings) => void
      ): this;

      Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

      server.prependOnceListener('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: 'timeout',
      listener: () => void
      ): this;

      Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

      server.prependOnceListener('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

      server.prependOnceListener('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

    • eventName: string | symbol
      ): Function[];

      Returns a copy of the array of listeners for the event named eventName, including any wrappers (such as those created by .once()).

      import { EventEmitter } from 'node:events';
      const emitter = new EventEmitter();
      emitter.once('log', () => console.log('log once'));
      
      // Returns a new Array with a function `onceWrapper` which has a property
      // `listener` which contains the original listener bound above
      const listeners = emitter.rawListeners('log');
      const logFnWrapper = listeners[0];
      
      // Logs "log once" to the console and does not unbind the `once` event
      logFnWrapper.listener();
      
      // Logs "log once" to the console and removes the listener
      logFnWrapper();
      
      emitter.on('log', () => console.log('log persistently'));
      // Will return a new Array with a single function bound by `.on()` above
      const newListeners = emitter.rawListeners('log');
      
      // Logs "log persistently" twice
      newListeners[0]();
      emitter.emit('log');
      
    • ref(): void;

      Calls ref() on this Http2Session instance's underlying net.Socket.

    • eventName?: string | symbol
      ): this;

      Removes all listeners, or those of the specified eventName.

      It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).

      Returns a reference to the EventEmitter, so that calls can be chained.

    • eventName: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Removes the specified listener from the listener array for the event named eventName.

      const callback = (stream) => {
        console.log('someone connected!');
      };
      server.on('connection', callback);
      // ...
      server.removeListener('connection', callback);
      

      removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance.

      Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from emit() in progress. Subsequent events behave as expected.

      import { EventEmitter } from 'node:events';
      class MyEmitter extends EventEmitter {}
      const myEmitter = new MyEmitter();
      
      const callbackA = () => {
        console.log('A');
        myEmitter.removeListener('event', callbackB);
      };
      
      const callbackB = () => {
        console.log('B');
      };
      
      myEmitter.on('event', callbackA);
      
      myEmitter.on('event', callbackB);
      
      // callbackA removes listener callbackB but it will still be called.
      // Internal listener array at time of emit [callbackA, callbackB]
      myEmitter.emit('event');
      // Prints:
      //   A
      //   B
      
      // callbackB is now removed.
      // Internal listener array [callbackA]
      myEmitter.emit('event');
      // Prints:
      //   A
      

      Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.

      When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed:

      import { EventEmitter } from 'node:events';
      const ee = new EventEmitter();
      
      function pong() {
        console.log('pong');
      }
      
      ee.on('ping', pong);
      ee.once('ping', pong);
      ee.removeListener('ping', pong);
      
      ee.emit('ping');
      ee.emit('ping');
      

      Returns a reference to the EventEmitter, so that calls can be chained.

    • windowSize: number
      ): void;

      Sets the local endpoint's window size. The windowSize is the total window size to set, not the delta.

      import http2 from 'node:http2';
      
      const server = http2.createServer();
      const expectedWindowSize = 2 ** 20;
      server.on('connect', (session) => {
      
        // Set local window size to be 2 ** 20
        session.setLocalWindowSize(expectedWindowSize);
      });
      
    • n: number
      ): this;

      By default EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps finding memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.

      Returns a reference to the EventEmitter, so that calls can be chained.
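
      For example, a minimal sketch using a plain EventEmitter (Http2Session inherits the same method); the threshold only controls the warning, it is not a hard limit:

      import { EventEmitter } from 'node:events';
      
      const emitter = new EventEmitter();
      emitter.setMaxListeners(20); // allow up to 20 listeners per event before warning
      console.log(emitter.getMaxListeners()); // 20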

    • msecs: number,
      callback?: () => void
      ): void;

      Used to set a callback function that is called when there is no activity on the Http2Session after msecs milliseconds. The given callback is registered as a listener on the 'timeout' event.
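
      A minimal sketch, assuming a server created with http2.createServer(); each idle session is closed after two minutes of inactivity:

      import http2 from 'node:http2';
      
      const server = http2.createServer();
      server.on('session', (session) => {
        // Close the session if there has been no activity for 2 minutes.
        session.setTimeout(120_000, () => session.close());
      });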

    • settings: Settings,
      callback?: (err: null | Error, settings: Settings, duration: number) => void
      ): void;

      Updates the current local settings for this Http2Session and sends a new SETTINGS frame to the connected HTTP/2 peer.

      Once called, the http2session.pendingSettingsAck property will be true while the session is waiting for the remote peer to acknowledge the new settings.

      The new settings will not become effective until the SETTINGS acknowledgment is received and the 'localSettings' event is emitted. It is possible to send multiple SETTINGS frames while acknowledgment is still pending.

      @param callback

      Callback that is called once the session is connected or right away if the session is already connected.
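
      A minimal sketch, assuming a server-side session obtained from the 'session' event; the chosen setting value is illustrative:

      import http2 from 'node:http2';
      
      const server = http2.createServer();
      server.on('session', (session) => {
        // Ask the peer to open at most 100 concurrent streams.
        session.settings({ maxConcurrentStreams: 100 }, (err, settings, duration) => {
          if (err) throw err;
          console.log(`SETTINGS acknowledged after ${duration}ms`, settings);
        });
      });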

    • unref(): void;

      Calls unref() on this Http2Session instance's underlying net.Socket.

  • interface Http2Stream

    Duplex streams are streams that implement both the Readable and Writable interfaces.

    Examples of Duplex streams include:

    • TCP sockets
    • zlib streams
    • crypto streams
    • readonly aborted: boolean

      Set to true if the Http2Stream instance was aborted abnormally. When set, the 'aborted' event will have been emitted.

    • allowHalfOpen: boolean

      If false then the stream will automatically end the writable side when the readable side ends. Set initially by the allowHalfOpen constructor option, which defaults to true.

      This can be changed manually to change the half-open behavior of an existing Duplex stream instance, but must be changed before the 'end' event is emitted.

    • readonly bufferSize: number

      This property shows the number of characters currently buffered to be written. See net.Socket.bufferSize for details.

    • readonly closed: boolean

      Set to true if the Http2Stream instance has been closed.

    • readonly destroyed: boolean

      Set to true if the Http2Stream instance has been destroyed and is no longer usable.

    • readonly endAfterHeaders: boolean

      Set to true if the END_STREAM flag was set in the request or response HEADERS frame received, indicating that no additional data should be received and the readable side of the Http2Stream will be closed.

    • readonly errored: null | Error

      Returns error if the stream has been destroyed with an error.

    • readonly id?: number

      The numeric stream identifier of this Http2Stream instance. Set to undefined if the stream identifier has not yet been assigned.

    • readonly pending: boolean

      Set to true if the Http2Stream instance has not yet been assigned a numeric stream identifier.

    • readable: boolean

      Is true if it is safe to call read, which means the stream has not been destroyed or emitted 'error' or 'end'.

    • readonly readableAborted: boolean

      Returns whether the stream was destroyed or errored before emitting 'end'.

    • readonly readableDidRead: boolean

      Returns whether 'data' has been emitted.

    • readonly readableEncoding: null | BufferEncoding

      Getter for the property encoding of a given Readable stream. The encoding property can be set using the setEncoding method.

    • readonly readableEnded: boolean

      Becomes true when 'end' event is emitted.

    • readonly readableFlowing: null | boolean

      This property reflects the current state of a Readable stream as described in the Three states section.

    • readonly readableHighWaterMark: number

      Returns the value of highWaterMark passed when creating this Readable.

    • readonly readableLength: number

      This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the highWaterMark.

    • readonly readableObjectMode: boolean

      Getter for the property objectMode of a given Readable stream.

    • readonly rstCode: number

      Set to the RST_STREAM error code reported when the Http2Stream is destroyed after either receiving an RST_STREAM frame from the connected peer, calling http2stream.close(), or http2stream.destroy(). Will be undefined if the Http2Stream has not been closed.

    • readonly sentHeaders: OutgoingHttpHeaders

      An object containing the outbound headers sent for this Http2Stream.

    • readonly sentInfoHeaders?: OutgoingHttpHeaders[]

      An array of objects containing the outbound informational (additional) headers sent for this Http2Stream.

    • readonly sentTrailers?: OutgoingHttpHeaders

      An object containing the outbound trailers sent for this Http2Stream.

    • readonly session: undefined | Http2Session

      A reference to the Http2Session instance that owns this Http2Stream. The value will be undefined after the Http2Stream instance is destroyed.

    • readonly state: StreamState

      Provides miscellaneous information about the current state of the Http2Stream.

      A current state of this Http2Stream.

    • readonly writable: boolean

      Is true if it is safe to call writable.write(), which means the stream has not been destroyed, errored, or ended.

    • readonly writableAborted: boolean

      Returns whether the stream was destroyed or errored before emitting 'finish'.

    • readonly writableCorked: number

      Number of times writable.uncork() needs to be called in order to fully uncork the stream.

    • readonly writableEnded: boolean

      Is true after writable.end() has been called. This property does not indicate whether the data has been flushed, for this use writable.writableFinished instead.

    • readonly writableFinished: boolean

      Is set to true immediately before the 'finish' event is emitted.

    • readonly writableHighWaterMark: number

      Return the value of highWaterMark passed when creating this Writable.

    • readonly writableLength: number

      This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the highWaterMark.

    • readonly writableNeedDrain: boolean

      Is true if the stream's buffer has been full and stream will emit 'drain'.

    • readonly writableObjectMode: boolean

      Getter for the property objectMode of a given Writable stream.

    • callback: (error?: null | Error) => void
      ): void;
    • error: null | Error,
      callback: (error?: null | Error) => void
      ): void;
    • callback: (error?: null | Error) => void
      ): void;
    • size: number
      ): void;
    • chunk: any,
      encoding: BufferEncoding,
      callback: (error?: null | Error) => void
      ): void;
    • chunks: { chunk: any; encoding: BufferEncoding }[],
      callback: (error?: null | Error) => void
      ): void;
    • [Symbol.asyncDispose](): Promise<void>;

      Calls readable.destroy() with an AbortError and returns a promise that fulfills when the stream is finished.

    • [Symbol.asyncIterator](): AsyncIterator<any>;
    • error: Error,
      event: string | symbol,
      ...args: AnyRest
      ): void;
    • event: 'aborted',
      listener: () => void
      ): this;

      Event emitter. The defined events on documents include:

      1. close
      2. data
      3. drain
      4. end
      5. error
      6. finish
      7. pause
      8. pipe
      9. readable
      10. resume
      11. unpipe
      event: 'close',
      listener: () => void
      ): this;

      Event emitter. The defined events on documents include:

      1. close
      2. data
      3. drain
      4. end
      5. error
      6. finish
      7. pause
      8. pipe
      9. readable
      10. resume
      11. unpipe
      event: 'data',
      listener: (chunk: string | Buffer<ArrayBufferLike>) => void
      ): this;

      Event emitter. The defined events on documents include:

      1. close
      2. data
      3. drain
      4. end
      5. error
      6. finish
      7. pause
      8. pipe
      9. readable
      10. resume
      11. unpipe
      event: 'drain',
      listener: () => void
      ): this;

      Event emitter. The defined events on documents include:

      1. close
      2. data
      3. drain
      4. end
      5. error
      6. finish
      7. pause
      8. pipe
      9. readable
      10. resume
      11. unpipe
      event: 'end',
      listener: () => void
      ): this;

      Event emitter. The defined events on documents include:

      1. close
      2. data
      3. drain
      4. end
      5. error
      6. finish
      7. pause
      8. pipe
      9. readable
      10. resume
      11. unpipe
      event: 'error',
      listener: (err: Error) => void
      ): this;

      Event emitter. The defined events on documents include:

      1. close
      2. data
      3. drain
      4. end
      5. error
      6. finish
      7. pause
      8. pipe
      9. readable
      10. resume
      11. unpipe
      event: 'finish',
      listener: () => void
      ): this;

      Event emitter. The defined events on documents include:

      1. close
      2. data
      3. drain
      4. end
      5. error
      6. finish
      7. pause
      8. pipe
      9. readable
      10. resume
      11. unpipe
      event: 'frameError',
      listener: (frameType: number, errorCode: number) => void
      ): this;

      Event emitter. The defined events on documents include:

      1. close
      2. data
      3. drain
      4. end
      5. error
      6. finish
      7. pause
      8. pipe
      9. readable
      10. resume
      11. unpipe
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;

      Event emitter. The defined events on documents include:

      1. close
      2. data
      3. drain
      4. end
      5. error
      6. finish
      7. pause
      8. pipe
      9. readable
      10. resume
      11. unpipe
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;

      Event emitter. The defined events on documents include:

      1. close
      2. data
      3. drain
      4. end
      5. error
      6. finish
      7. pause
      8. pipe
      9. readable
      10. resume
      11. unpipe
      event: 'streamClosed',
      listener: (code: number) => void
      ): this;

      Event emitter. The defined events on documents include:

      1. close
      2. data
      3. drain
      4. end
      5. error
      6. finish
      7. pause
      8. pipe
      9. readable
      10. resume
      11. unpipe
      event: 'timeout',
      listener: () => void
      ): this;

      Event emitter. The defined events on documents include:

      1. close
      2. data
      3. drain
      4. end
      5. error
      6. finish
      7. pause
      8. pipe
      9. readable
      10. resume
      11. unpipe
      event: 'trailers',
      listener: (trailers: IncomingHttpHeaders, flags: number) => void
      ): this;

      Event emitter. The defined events on documents include:

      1. close
      2. data
      3. drain
      4. end
      5. error
      6. finish
      7. pause
      8. pipe
      9. readable
      10. resume
      11. unpipe
      event: 'wantTrailers',
      listener: () => void
      ): this;

      Event emitter. The defined events on documents include:

      1. close
      2. data
      3. drain
      4. end
      5. error
      6. finish
      7. pause
      8. pipe
      9. readable
      10. resume
      11. unpipe
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Event emitter. The defined events on documents include:

      1. close
      2. data
      3. drain
      4. end
      5. error
      6. finish
      7. pause
      8. pipe
      9. readable
      10. resume
      11. unpipe
    • options?: Pick<ArrayOptions, 'signal'>

      This method returns a new stream with chunks of the underlying stream paired with a counter in the form [index, chunk]. The first index value is 0 and it increases by 1 for each chunk produced.

      @returns

      a stream of indexed pairs.
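
      A minimal sketch on a plain Readable, assuming this entry corresponds to the experimental readable.asIndexedPairs() helper that Http2Stream inherits:

      import { Readable } from 'node:stream';
      
      const pairs = await Readable.from(['a', 'b', 'c']).asIndexedPairs().toArray();
      console.log(pairs); // [ [ 0, 'a' ], [ 1, 'b' ], [ 2, 'c' ] ]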

    • code?: number,
      callback?: () => void
      ): void;

      Closes the Http2Stream instance by sending an RST_STREAM frame to the connected HTTP/2 peer.

      @param code

      Unsigned 32-bit integer identifying the error code.

      @param callback

      An optional function registered to listen for the 'close' event.
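
      For example, a server might refuse a stream outright (a minimal sketch; NGHTTP2_REFUSED_STREAM is one of the error codes exposed on http2.constants):

      import http2 from 'node:http2';
      
      const server = http2.createServer();
      server.on('stream', (stream) => {
        stream.close(http2.constants.NGHTTP2_REFUSED_STREAM, () => {
          console.log('stream closed');
        });
      });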

    • compose<T extends ReadableStream>(
      stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,
      options?: { signal: AbortSignal }
      ): T;
    • cork(): void;

      The writable.cork() method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.

      The primary intent of writable.cork() is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them all to writable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork() without implementing writable._writev() may have an adverse effect on throughput.

      See also: writable.uncork(), writable._writev().

    • error?: Error
      ): this;

      Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the readable stream will release any internal resources and subsequent calls to push() will be ignored.

      Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.

      Implementors should not override this method, but instead implement readable._destroy().

      @param error

      Error which will be passed as payload in 'error' event

    • limit: number,
      options?: Pick<ArrayOptions, 'signal'>

      This method returns a new stream with the first limit chunks dropped from the start.

      @param limit

      the number of chunks to drop from the readable.

      @returns

      a stream with limit chunks dropped from the start.
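
      A minimal sketch on a plain Readable (Http2Stream inherits the same helper):

      import { Readable } from 'node:stream';
      
      console.log(await Readable.from([1, 2, 3, 4]).drop(2).toArray()); // [ 3, 4 ]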

    • event: 'aborted'
      ): boolean;

      Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

      Returns true if the event had listeners, false otherwise.

      import { EventEmitter } from 'node:events';
      const myEmitter = new EventEmitter();
      
      // First listener
      myEmitter.on('event', function firstListener() {
        console.log('Helloooo! first listener');
      });
      // Second listener
      myEmitter.on('event', function secondListener(arg1, arg2) {
        console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
      });
      // Third listener
      myEmitter.on('event', function thirdListener(...args) {
        const parameters = args.join(', ');
        console.log(`event with parameters ${parameters} in third listener`);
      });
      
      console.log(myEmitter.listeners('event'));
      
      myEmitter.emit('event', 1, 2, 3, 4, 5);
      
      // Prints:
      // [
      //   [Function: firstListener],
      //   [Function: secondListener],
      //   [Function: thirdListener]
      // ]
      // Helloooo! first listener
      // event with parameters 1, 2 in second listener
      // event with parameters 1, 2, 3, 4, 5 in third listener
      
      event: 'close'
      ): boolean;
      event: 'data',
      chunk: string | Buffer<ArrayBufferLike>
      ): boolean;
      event: 'drain'
      ): boolean;
      event: 'end'
      ): boolean;
      event: 'error',
      err: Error
      ): boolean;
      event: 'finish'
      ): boolean;
      event: 'frameError',
      frameType: number,
      errorCode: number
      ): boolean;
      event: 'pipe',
      ): boolean;
      event: 'unpipe',
      ): boolean;
      event: 'streamClosed',
      code: number
      ): boolean;
      event: 'timeout'
      ): boolean;
      event: 'trailers',
      flags: number
      ): boolean;
      event: 'wantTrailers'
      ): boolean;
      event: string | symbol,
      ...args: any[]
      ): boolean;
    • cb?: () => void
      ): this;

      Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

      Calling the write method after calling end will raise an error.

      // Write 'hello, ' and then end with 'world!'.
      import fs from 'node:fs';
      const file = fs.createWriteStream('example.txt');
      file.write('hello, ');
      file.end('world!');
      // Writing more now is not allowed!
      
      chunk: any,
      cb?: () => void
      ): this;

      Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

      Calling the write method after calling end will raise an error.

      // Write 'hello, ' and then end with 'world!'.
      import fs from 'node:fs';
      const file = fs.createWriteStream('example.txt');
      file.write('hello, ');
      file.end('world!');
      // Writing more now is not allowed!
      
      @param chunk

      Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

      chunk: any,
      encoding: BufferEncoding,
      cb?: () => void
      ): this;

      Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

      Calling the write method after calling end will raise an error.

      // Write 'hello, ' and then end with 'world!'.
      import fs from 'node:fs';
      const file = fs.createWriteStream('example.txt');
      file.write('hello, ');
      file.end('world!');
      // Writing more now is not allowed!
      
      @param chunk

      Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

      @param encoding

      The encoding if chunk is a string

    • eventNames(): (string | symbol)[];

      Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or Symbols.

      import { EventEmitter } from 'node:events';
      
      const myEE = new EventEmitter();
      myEE.on('foo', () => {});
      myEE.on('bar', () => {});
      
      const sym = Symbol('symbol');
      myEE.on(sym, () => {});
      
      console.log(myEE.eventNames());
      // Prints: [ 'foo', 'bar', Symbol(symbol) ]
      
    • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
      options?: ArrayOptions
      ): Promise<boolean>;

      This method is similar to Array.prototype.every and calls fn on each chunk in the stream to check if all awaited return values are truthy for fn. Once an awaited return value of an fn call on a chunk is falsy, the stream is destroyed and the promise is fulfilled with false. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true.

      @param fn

      a function to call on each chunk of the stream. Async or not.

      @returns

      a promise evaluating to true if fn returned a truthy value for every one of the chunks.
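
      A minimal sketch on a plain Readable:

      import { Readable } from 'node:stream';
      
      const allPositive = await Readable.from([1, 2, 3]).every((n) => n > 0);
      console.log(allPositive); // true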

    • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
      options?: ArrayOptions

      This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be awaited.

      @param fn

      a function to filter chunks from the stream. Async or not.

      @returns

      a stream filtered with the predicate fn.
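
      A minimal sketch on a plain Readable:

      import { Readable } from 'node:stream';
      
      const evens = await Readable.from([1, 2, 3, 4])
        .filter((n) => n % 2 === 0)
        .toArray();
      console.log(evens); // [ 2, 4 ]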

    • find<T>(
      fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => data is T,
      options?: ArrayOptions
      ): Promise<undefined | T>;

      This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.

      @param fn

      a function to call on each chunk of the stream. Async or not.

      @returns

      a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.

      fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
      options?: ArrayOptions
      ): Promise<any>;

      This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.

      @param fn

      a function to call on each chunk of the stream. Async or not.

      @returns

      a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.
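
      A minimal sketch on a plain Readable:

      import { Readable } from 'node:stream';
      
      const firstLarge = await Readable.from([1, 4, 9]).find((n) => n >= 4);
      console.log(firstLarge); // 4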

    • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => any,
      options?: ArrayOptions

      This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.

      It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.

      @param fn

      a function to map over every chunk in the stream. May be async. May be a stream or generator.

      @returns

      a stream flat-mapped with the function fn.
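
      A minimal sketch on a plain Readable:

      import { Readable } from 'node:stream';
      
      const words = await Readable.from(['hello world', 'foo bar'])
        .flatMap((line) => line.split(' '))
        .toArray();
      console.log(words); // [ 'hello', 'world', 'foo', 'bar' ]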

    • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => void | Promise<void>,
      options?: ArrayOptions
      ): Promise<void>;

      This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be awaited.

      This method is different from for await...of loops in that it can optionally process chunks concurrently. In addition, a forEach iteration can only be stopped by having passed a signal option and aborting the related AbortController while for await...of can be stopped with break or return. In either case the stream will be destroyed.

      This method is different from listening to the 'data' event in that it uses the 'readable' event in the underlying machinery and can limit the number of concurrent fn calls.

      @param fn

      a function to call on each chunk of the stream. Async or not.

      @returns

      a promise for when the stream has finished.
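
      A minimal sketch on a plain Readable, processing up to two chunks at a time:

      import { Readable } from 'node:stream';
      
      await Readable.from([1, 2, 3, 4]).forEach(async (n) => {
        console.log(n * n);
      }, { concurrency: 2 });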

    • getMaxListeners(): number;

      Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to EventEmitter.defaultMaxListeners.

    • isPaused(): boolean;

      The readable.isPaused() method returns the current operating state of the Readable. This is used primarily by the mechanism that underlies the readable.pipe() method. In most typical cases, there will be no reason to use this method directly.

      const readable = new stream.Readable();
      
      readable.isPaused(); // === false
      readable.pause();
      readable.isPaused(); // === true
      readable.resume();
      readable.isPaused(); // === false
      
    • options?: { destroyOnReturn: boolean }
      ): AsyncIterator<any>;

      The iterator created by this method gives users the option to cancel the destruction of the stream if the for await...of loop is exited by return, break, or throw, or if the iterator should destroy the stream if the stream emitted an error during iteration.
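
      A minimal sketch: breaking out of the loop early without destroying the underlying stream:

      import { Readable } from 'node:stream';
      
      const readable = Readable.from([1, 2, 3, 4]);
      for await (const chunk of readable.iterator({ destroyOnReturn: false })) {
        console.log(chunk);
        if (chunk === 2) break; // early exit does not destroy the stream
      }
      console.log(readable.destroyed); // false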

    • eventName: string | symbol,
      listener?: Function
      ): number;

      Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.

      @param eventName

      The name of the event being listened for

      @param listener

      The event handler function
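
      A minimal sketch using a plain EventEmitter; passing the listener narrows the count to that specific handler:

      import { EventEmitter } from 'node:events';
      
      const emitter = new EventEmitter();
      const handler = () => {};
      emitter.on('ping', handler);
      emitter.on('ping', () => {});
      
      console.log(emitter.listenerCount('ping'));          // 2
      console.log(emitter.listenerCount('ping', handler)); // 1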

    • eventName: string | symbol
      ): Function[];

      Returns a copy of the array of listeners for the event named eventName.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      console.log(util.inspect(server.listeners('connection')));
      // Prints: [ [Function] ]
      
    • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => any,
      options?: ArrayOptions

      This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be awaited before being passed to the result stream.

      @param fn

      a function to map over every chunk in the stream. Async or not.

      @returns

      a stream mapped with the function fn.
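
      A minimal sketch on a plain Readable:

      import { Readable } from 'node:stream';
      
      const doubled = await Readable.from([1, 2, 3]).map((n) => n * 2).toArray();
      console.log(doubled); // [ 2, 4, 6 ]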

    • off<K>(
      eventName: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Alias for emitter.removeListener().

    • event: 'aborted',
      listener: () => void
      ): this;

      Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.on('foo', () => console.log('a'));
      myEE.prependListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: 'close',
      listener: () => void
      ): this;
      event: 'data',
      listener: (chunk: string | Buffer<ArrayBufferLike>) => void
      ): this;
      event: 'drain',
      listener: () => void
      ): this;
      event: 'end',
      listener: () => void
      ): this;
      event: 'error',
      listener: (err: Error) => void
      ): this;
      event: 'finish',
      listener: () => void
      ): this;
      event: 'frameError',
      listener: (frameType: number, errorCode: number) => void
      ): this;
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;
      event: 'streamClosed',
      listener: (code: number) => void
      ): this;
      event: 'timeout',
      listener: () => void
      ): this;
      event: 'trailers',
      listener: (trailers: IncomingHttpHeaders, flags: number) => void
      ): this;
      event: 'wantTrailers',
      listener: () => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • event: 'aborted',
      listener: () => void
      ): this;

      Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

      server.once('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.once('foo', () => console.log('a'));
      myEE.prependOnceListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: 'close',
      listener: () => void
      ): this;
      event: 'data',
      listener: (chunk: string | Buffer<ArrayBufferLike>) => void
      ): this;
      event: 'drain',
      listener: () => void
      ): this;
      event: 'end',
      listener: () => void
      ): this;
      event: 'error',
      listener: (err: Error) => void
      ): this;
      event: 'finish',
      listener: () => void
      ): this;
      event: 'frameError',
      listener: (frameType: number, errorCode: number) => void
      ): this;
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;
      event: 'streamClosed',
      listener: (code: number) => void
      ): this;
      event: 'timeout',
      listener: () => void
      ): this;
      event: 'trailers',
      listener: (trailers: IncomingHttpHeaders, flags: number) => void
      ): this;
      event: 'wantTrailers',
      listener: () => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • pause(): this;

      The readable.pause() method will cause a stream in flowing mode to stop emitting 'data' events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.

      const readable = getReadableStreamSomehow();
      readable.on('data', (chunk) => {
        console.log(`Received ${chunk.length} bytes of data.`);
        readable.pause();
        console.log('There will be no additional data for 1 second.');
        setTimeout(() => {
          console.log('Now data will start flowing again.');
          readable.resume();
        }, 1000);
      });
      

      The readable.pause() method has no effect if there is a 'readable' event listener.

    • pipe<T extends WritableStream>(
      destination: T,
      options?: { end: boolean }
      ): T;
    • event: 'aborted',
      listener: () => void
      ): this;

      Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.prependListener('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: 'close',
      listener: () => void
      ): this;
      event: 'data',
      listener: (chunk: string | Buffer<ArrayBufferLike>) => void
      ): this;
      event: 'drain',
      listener: () => void
      ): this;
      event: 'end',
      listener: () => void
      ): this;
      event: 'error',
      listener: (err: Error) => void
      ): this;
      event: 'finish',
      listener: () => void
      ): this;
      event: 'frameError',
      listener: (frameType: number, errorCode: number) => void
      ): this;
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;
      event: 'streamClosed',
      listener: (code: number) => void
      ): this;
      event: 'timeout',
      listener: () => void
      ): this;
      event: 'trailers',
      listener: (trailers: IncomingHttpHeaders, flags: number) => void
      ): this;
      event: 'wantTrailers',
      listener: () => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • event: 'aborted',
      listener: () => void
      ): this;

      Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

      server.prependOnceListener('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: 'close',
      listener: () => void
      ): this;
      event: 'data',
      listener: (chunk: string | Buffer<ArrayBufferLike>) => void
      ): this;
      event: 'drain',
      listener: () => void
      ): this;
      event: 'end',
      listener: () => void
      ): this;
      event: 'error',
      listener: (err: Error) => void
      ): this;
      event: 'finish',
      listener: () => void
      ): this;
      event: 'frameError',
      listener: (frameType: number, errorCode: number) => void
      ): this;
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;
      event: 'streamClosed',
      listener: (code: number) => void
      ): this;
      event: 'timeout',
      listener: () => void
      ): this;
      event: 'trailers',
      listener: (trailers: IncomingHttpHeaders, flags: number) => void
      ): this;
      event: 'wantTrailers',
      listener: () => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • ): void;

      Updates the priority for this Http2Stream instance.
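
      A minimal sketch, assuming the elided options argument follows Node's http2stream.priority() shape (exclusive, parent, weight, silent):

      // Give this stream a higher weight relative to its siblings.
      stream.priority({ exclusive: false, weight: 220 });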

    • chunk: any,
      encoding?: BufferEncoding
      ): boolean;
    • eventName: string | symbol
      ): Function[];

      Returns a copy of the array of listeners for the event named eventName, including any wrappers (such as those created by .once()).

      import { EventEmitter } from 'node:events';
      const emitter = new EventEmitter();
      emitter.once('log', () => console.log('log once'));
      
      // Returns a new Array with a function `onceWrapper` which has a property
      // `listener` which contains the original listener bound above
      const listeners = emitter.rawListeners('log');
      const logFnWrapper = listeners[0];
      
      // Logs "log once" to the console and does not unbind the `once` event
      logFnWrapper.listener();
      
      // Logs "log once" to the console and removes the listener
      logFnWrapper();
      
      emitter.on('log', () => console.log('log persistently'));
      // Will return a new Array with a single function bound by `.on()` above
      const newListeners = emitter.rawListeners('log');
      
      // Logs "log persistently" twice
      newListeners[0]();
      emitter.emit('log');
      
    • size?: number
      ): any;

      The readable.read() method reads data out of the internal buffer and returns it. If no data is available to be read, null is returned. By default, the data is returned as a Buffer object unless an encoding has been specified using the readable.setEncoding() method or the stream is operating in object mode.

      The optional size argument specifies a specific number of bytes to read. If size bytes are not available to be read, null will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.

      If the size argument is not specified, all of the data contained in the internal buffer will be returned.

      The size argument must be less than or equal to 1 GiB.

      The readable.read() method should only be called on Readable streams operating in paused mode. In flowing mode, readable.read() is called automatically until the internal buffer is fully drained.

      const readable = getReadableStreamSomehow();
      
      // 'readable' may be triggered multiple times as data is buffered in
      readable.on('readable', () => {
        let chunk;
        console.log('Stream is readable (new data received in buffer)');
        // Use a loop to make sure we read all currently available data
        while (null !== (chunk = readable.read())) {
          console.log(`Read ${chunk.length} bytes of data...`);
        }
      });
      
      // 'end' will be triggered once when there is no more data available
      readable.on('end', () => {
        console.log('Reached end of stream.');
      });
      

      Each call to readable.read() returns a chunk of data, or null. The chunks are not concatenated. A while loop is necessary to consume all data currently in the buffer. When reading a large file .read() may return null, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new 'readable' event will be emitted when there is more data in the buffer. Finally the 'end' event will be emitted when there is no more data to come.

      Therefore to read a file's whole contents from a readable, it is necessary to collect chunks across multiple 'readable' events:

      const chunks = [];
      
      readable.on('readable', () => {
        let chunk;
        while (null !== (chunk = readable.read())) {
          chunks.push(chunk);
        }
      });
      
      readable.on('end', () => {
        const content = chunks.join('');
      });
      

      A Readable stream in object mode will always return a single item from a call to readable.read(size), regardless of the value of the size argument.

      If the readable.read() method returns a chunk of data, a 'data' event will also be emitted.

      Calling read after the 'end' event has been emitted will return null. No runtime error will be raised.

      @param size

      Optional argument to specify how much data to read.

    • reduce<T = any>(
      fn: (previous: any, data: any, options?: Pick<ArrayOptions, 'signal'>) => T,
      initial?: undefined,
      options?: Pick<ArrayOptions, 'signal'>
      ): Promise<T>;

      This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.

      If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.

      The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function into a readable.map call.

      @param fn

      a reducer function to call over every chunk in the stream. Async or not.

      @param initial

      the initial value to use in the reduction.

      @returns

      a promise for the final value of the reduction.

      reduce<T = any>(
      fn: (previous: T, data: any, options?: Pick<ArrayOptions, 'signal'>) => T,
      initial: T,
      options?: Pick<ArrayOptions, 'signal'>
      ): Promise<T>;

      This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.

      If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.

      The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function into a readable.map call.

      @param fn

      a reducer function to call over every chunk in the stream. Async or not.

      @param initial

      the initial value to use in the reduction.

      @returns

      a promise for the final value of the reduction.
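
      A minimal sketch on a plain Readable:

      import { Readable } from 'node:stream';
      
      const total = await Readable.from([1, 2, 3, 4]).reduce((sum, n) => sum + n, 0);
      console.log(total); // 10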

    • eventName?: string | symbol
      ): this;

      Removes all listeners, or those of the specified eventName.

      It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).

      Returns a reference to the EventEmitter, so that calls can be chained.

    • event: 'close',
      listener: () => void
      ): this;

      Removes the specified listener from the listener array for the event named eventName.

      const callback = (stream) => {
        console.log('someone connected!');
      };
      server.on('connection', callback);
      // ...
      server.removeListener('connection', callback);
      

      removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance.

      Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from emit() in progress. Subsequent events behave as expected.

      import { EventEmitter } from 'node:events';
      class MyEmitter extends EventEmitter {}
      const myEmitter = new MyEmitter();
      
      const callbackA = () => {
        console.log('A');
        myEmitter.removeListener('event', callbackB);
      };
      
      const callbackB = () => {
        console.log('B');
      };
      
      myEmitter.on('event', callbackA);
      
      myEmitter.on('event', callbackB);
      
      // callbackA removes listener callbackB but it will still be called.
      // Internal listener array at time of emit [callbackA, callbackB]
      myEmitter.emit('event');
      // Prints:
      //   A
      //   B
      
      // callbackB is now removed.
      // Internal listener array [callbackA]
      myEmitter.emit('event');
      // Prints:
      //   A
      

      Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.

      When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed:

      import { EventEmitter } from 'node:events';
      const ee = new EventEmitter();
      
      function pong() {
        console.log('pong');
      }
      
      ee.on('ping', pong);
      ee.once('ping', pong);
      ee.removeListener('ping', pong);
      
      ee.emit('ping');
      ee.emit('ping');
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      event: 'data',
      listener: (chunk: any) => void
      ): this;
      event: 'drain',
      listener: () => void
      ): this;
      event: 'end',
      listener: () => void
      ): this;
      event: 'error',
      listener: (err: Error) => void
      ): this;
      event: 'finish',
      listener: () => void
      ): this;
      event: 'pause',
      listener: () => void
      ): this;
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;
      event: 'readable',
      listener: () => void
      ): this;
      event: 'resume',
      listener: () => void
      ): this;
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • resume(): this;

      The readable.resume() method causes an explicitly paused Readable stream to resume emitting 'data' events, switching the stream into flowing mode.

      The readable.resume() method can be used to fully consume the data from a stream without actually processing any of that data:

      getReadableStreamSomehow()
        .resume()
        .on('end', () => {
          console.log('Reached the end, but did not read anything.');
        });
      

      The readable.resume() method has no effect if there is a 'readable' event listener.

    • ): void;

      Sends a trailing HEADERS frame to the connected HTTP/2 peer. This method will cause the Http2Stream to be immediately closed and must only be called after the 'wantTrailers' event has been emitted. When sending a request or sending a response, the options.waitForTrailers option must be set in order to keep the Http2Stream open after the final DATA frame so that trailers can be sent.

      import http2 from 'node:http2';
      const server = http2.createServer();
      server.on('stream', (stream) => {
        stream.respond(undefined, { waitForTrailers: true });
        stream.on('wantTrailers', () => {
          stream.sendTrailers({ xyz: 'abc' });
        });
        stream.end('Hello World');
      });
      

      The HTTP/1 specification forbids trailers from containing HTTP/2 pseudo-header fields (e.g. ':method', ':path', etc).

    • encoding: BufferEncoding
      ): this;

      The writable.setDefaultEncoding() method sets the default encoding for a Writable stream.

      @param encoding

      The new default encoding

    • encoding: BufferEncoding
      ): this;

      The readable.setEncoding() method sets the character encoding for data read from the Readable stream.

      By default, no encoding is assigned and stream data will be returned as Buffer objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than as Buffer objects. For instance, calling readable.setEncoding('utf8') will cause the output data to be interpreted as UTF-8 data, and passed as strings. Calling readable.setEncoding('hex') will cause the data to be encoded in hexadecimal string format.

      The Readable stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream as Buffer objects.

      const readable = getReadableStreamSomehow();
      readable.setEncoding('utf8');
      readable.on('data', (chunk) => {
        assert.equal(typeof chunk, 'string');
        console.log('Got %d characters of string data:', chunk.length);
      });
      
      @param encoding

      The encoding to use.

    • n: number
      ): this;

      By default EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps finding memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.

      Returns a reference to the EventEmitter, so that calls can be chained.

    • msecs: number,
      callback?: () => void
      ): void;

      Used to set a callback function that is called when there is no activity on the Http2Stream after msecs milliseconds. The given callback is registered as a listener on the 'timeout' event.

      import http2 from 'node:http2';
      const client = http2.connect('http://example.org:8000');
      const { NGHTTP2_CANCEL } = http2.constants;
      const req = client.request({ ':path': '/' });
      
      // Cancel the stream if there's no activity after 5 seconds
      req.setTimeout(5000, () => req.close(NGHTTP2_CANCEL));
      
    • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
      options?: ArrayOptions
      ): Promise<boolean>;

      This method is similar to Array.prototype.some and calls fn on each chunk in the stream until an awaited return value is truthy. Once an awaited return value of an fn call on a chunk is truthy, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false.

      @param fn

      a function to call on each chunk of the stream. Async or not.

      @returns

      a promise evaluating to true if fn returned a truthy value for at least one of the chunks.
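
      A minimal sketch on a plain Readable:

      import { Readable } from 'node:stream';
      
      const hasNegative = await Readable.from([1, -2, 3]).some((n) => n < 0);
      console.log(hasNegative); // true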

    • limit: number,
      options?: Pick<ArrayOptions, 'signal'>

      This method returns a new stream with the first limit chunks.

      @param limit

      the number of chunks to take from the readable.

      @returns

      a stream with limit chunks taken.
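
      A minimal sketch on a plain Readable:

      import { Readable } from 'node:stream';
      
      console.log(await Readable.from([1, 2, 3, 4]).take(2).toArray()); // [ 1, 2 ]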

    • options?: Pick<ArrayOptions, 'signal'>
      ): Promise<any[]>;

      This method allows easily obtaining the contents of a stream.

      As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.

      @returns

      a promise containing an array with the contents of the stream.
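
      A minimal sketch:

      import { Readable } from 'node:stream';
      
      // Buffers the whole (small) stream into a single array.
      const chunks = await Readable.from([1, 2, 3]).toArray();
      console.log(chunks); // [ 1, 2, 3 ]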

    • uncork(): void;

      The writable.uncork() method flushes all data buffered since cork was called.

      When using writable.cork() and writable.uncork() to manage the buffering of writes to a stream, defer calls to writable.uncork() using process.nextTick(). Doing so allows batching of all writable.write() calls that occur within a given Node.js event loop phase.

      stream.cork();
      stream.write('some ');
      stream.write('data ');
      process.nextTick(() => stream.uncork());
      

      If the writable.cork() method is called multiple times on a stream, the same number of calls to writable.uncork() must be called to flush the buffered data.

      stream.cork();
      stream.write('some ');
      stream.cork();
      stream.write('data ');
      process.nextTick(() => {
        stream.uncork();
        // The data will not be flushed until uncork() is called a second time.
        stream.uncork();
      });
      

      See also: writable.cork().

    • unpipe(
      destination?: WritableStream
      ): this;

      The readable.unpipe() method detaches a Writable stream previously attached using the pipe method.

      If the destination is not specified, then all pipes are detached.

      If the destination is specified, but no pipe is set up for it, then the method does nothing.

      import fs from 'node:fs';
      const readable = getReadableStreamSomehow();
      const writable = fs.createWriteStream('file.txt');
      // All the data from readable goes into 'file.txt',
      // but only for the first second.
      readable.pipe(writable);
      setTimeout(() => {
        console.log('Stop writing to file.txt.');
        readable.unpipe(writable);
        console.log('Manually close the file stream.');
        writable.end();
      }, 1000);
      
      @param destination

      Optional specific stream to unpipe

    • unshift(
      chunk: any,
      encoding?: BufferEncoding
      ): void;

      Passing chunk as null signals the end of the stream (EOF) and behaves the same as readable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.

      The readable.unshift() method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.

      The stream.unshift(chunk) method cannot be called after the 'end' event has been emitted or a runtime error will be thrown.

      Developers using stream.unshift() often should consider switching to use of a Transform stream instead. See the API for stream implementers section for more information.

      // Pull off a header delimited by \n\n.
      // Use unshift() if we get too much.
      // Call the callback with (error, header, stream).
      import { StringDecoder } from 'node:string_decoder';
      function parseHeader(stream, callback) {
        stream.on('error', callback);
        stream.on('readable', onReadable);
        const decoder = new StringDecoder('utf8');
        let header = '';
        function onReadable() {
          let chunk;
          while (null !== (chunk = stream.read())) {
            const str = decoder.write(chunk);
            if (str.includes('\n\n')) {
              // Found the header boundary.
              const split = str.split(/\n\n/);
              header += split.shift();
              const remaining = split.join('\n\n');
              const buf = Buffer.from(remaining, 'utf8');
              stream.removeListener('error', callback);
              // Remove the 'readable' listener before unshifting.
              stream.removeListener('readable', onReadable);
              if (buf.length)
                stream.unshift(buf);
              // Now the body of the message can be read from the stream.
              callback(null, header, stream);
              return;
            }
            // Still reading the header.
            header += str;
          }
        }
      }
      

      Unlike push, stream.unshift(chunk) will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results if readable.unshift() is called during a read (i.e. from within a _read implementation on a custom stream). Following the call to readable.unshift() with an immediate push will reset the reading state appropriately, however it is best to simply avoid calling readable.unshift() while in the process of performing a read.

      @param chunk

      Chunk of data to unshift onto the read queue. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray}, {DataView} or null. For object mode streams, chunk may be any JavaScript value.

      @param encoding

      Encoding of string chunks. Must be a valid Buffer encoding, such as 'utf8' or 'ascii'.

    • wrap(
      stream: ReadableStream
      ): this;

      Prior to Node.js 0.10, streams did not implement the entire node:stream module API as it is currently defined. (See Compatibility for more information.)

      When using an older Node.js library that emits 'data' events and has a pause method that is advisory only, the readable.wrap() method can be used to create a Readable stream that uses the old stream as its data source.

      It will rarely be necessary to use readable.wrap() but the method has been provided as a convenience for interacting with older Node.js applications and libraries.

      import { OldReader } from './old-api-module.js';
      import { Readable } from 'node:stream';
      const oreader = new OldReader();
      const myReader = new Readable().wrap(oreader);
      
      myReader.on('readable', () => {
        myReader.read(); // etc.
      });
      
      @param stream

      An "old style" readable stream

    • write(
      chunk: any,
      callback?: (error: undefined | null | Error) => void
      ): boolean;

      The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

      The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

      While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

      Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

      If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

      function write(data, cb) {
        if (!stream.write(data)) {
          stream.once('drain', cb);
        } else {
          process.nextTick(cb);
        }
      }
      
      // Wait for cb to be called before doing any other write.
      write('hello', () => {
        console.log('Write completed, do more writes now.');
      });
      

      A Writable stream in object mode will always ignore the encoding argument.

      @param chunk

      Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

      @param callback

      Callback for when this chunk of data is flushed.

      @returns

      false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.

      write(
      chunk: any,
      encoding: BufferEncoding,
      callback?: (error: undefined | null | Error) => void
      ): boolean;

      The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

      The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

      While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

      Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

      If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

      function write(data, cb) {
        if (!stream.write(data)) {
          stream.once('drain', cb);
        } else {
          process.nextTick(cb);
        }
      }
      
      // Wait for cb to be called before doing any other write.
      write('hello', () => {
        console.log('Write completed, do more writes now.');
      });
      

      A Writable stream in object mode will always ignore the encoding argument.

      @param chunk

      Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

      @param encoding

      The encoding, if chunk is a string.

      @param callback

      Callback for when this chunk of data is flushed.

      @returns

      false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.

  • interface IncomingHttpHeaders

  • interface SecureClientSessionOptions

    • allowPartialTrustChain?: boolean

      Treat intermediate (non-self-signed) certificates in the trust CA certificate list as trusted.

    • ALPNCallback?: (arg: { protocols: string[]; servername: string }) => undefined | string

      If set, this will be called when a client opens a connection using the ALPN extension. One argument will be passed to the callback: an object containing servername and protocols fields, respectively containing the server name from the SNI extension (if any) and an array of ALPN protocol name strings. The callback must return either one of the strings listed in protocols, which will be returned to the client as the selected ALPN protocol, or undefined, to reject the connection with a fatal alert. If a string is returned that does not match one of the client's ALPN protocols, an error will be thrown. This option cannot be used with the ALPNProtocols option, and setting both options will throw an error.

    • ALPNProtocols?: Uint8Array<ArrayBufferLike> | string[] | Uint8Array<ArrayBufferLike>[]

      An array of strings or a Buffer naming possible ALPN protocols. (Protocols should be ordered by their priority.)

    • ca?: string | Buffer<ArrayBufferLike> | string | Buffer<ArrayBufferLike>[]

      Optionally override the trusted CA certificates. Default is to trust the well-known CAs curated by Mozilla. Mozilla's CAs are completely replaced when CAs are explicitly specified using this option.

    • cert?: string | Buffer<ArrayBufferLike> | string | Buffer<ArrayBufferLike>[]

      Cert chains in PEM format. One cert chain should be provided per private key. Each cert chain should consist of the PEM formatted certificate for a provided private key, followed by the PEM formatted intermediate certificates (if any), in order, and not including the root CA (the root CA must be pre-known to the peer, see ca). When providing multiple cert chains, they do not have to be in the same order as their private keys in key. If the intermediate certificates are not provided, the peer will not be able to validate the certificate, and the handshake will fail.

    • checkServerIdentity?: (hostname: string, cert: PeerCertificate) => undefined | Error
    • ciphers?: string

      Cipher suite specification, replacing the default. For more information, see modifying the default cipher suite. Permitted ciphers can be obtained via tls.getCiphers(). Cipher names must be uppercased in order for OpenSSL to accept them.

    • createConnection?: (authority: URL, option: SessionOptions) => Duplex
    • crl?: string | Buffer<ArrayBufferLike> | string | Buffer<ArrayBufferLike>[]

      PEM formatted CRLs (Certificate Revocation Lists).

    • dhparam?: string | Buffer<ArrayBufferLike>

      'auto' or custom Diffie-Hellman parameters, required for non-ECDHE perfect forward secrecy. If omitted or invalid, the parameters are silently discarded and DHE ciphers will not be available. ECDHE-based perfect forward secrecy will still be available.

    • ecdhCurve?: string

      A string describing a named curve or a colon separated list of curve NIDs or names, for example P-521:P-384:P-256, to use for ECDH key agreement. Set to auto to select the curve automatically. Use crypto.getCurves() to obtain a list of available curve names. On recent releases, openssl ecparam -list_curves will also display the name and description of each available elliptic curve. Default: tls.DEFAULT_ECDH_CURVE.

    • enableTrace?: boolean

      When enabled, TLS packet trace information is written to stderr. This can be used to debug TLS connection problems.

    • honorCipherOrder?: boolean

      Attempt to use the server's cipher suite preferences instead of the client's. When true, causes SSL_OP_CIPHER_SERVER_PREFERENCE to be set in secureOptions

    • host?: string
    • key?: string | Buffer<ArrayBufferLike> | string | Buffer<ArrayBufferLike> | KeyObject[]

      Private keys in PEM format. PEM allows the option of private keys being encrypted. Encrypted keys will be decrypted with options.passphrase. Multiple keys using different algorithms can be provided either as an array of unencrypted key strings or buffers, or an array of objects in the form {pem: <string|buffer>[, passphrase: <string>]}. The object form can only occur in an array. object.passphrase is optional. Encrypted keys will be decrypted with object.passphrase if provided, or options.passphrase if it is not.

    • maxVersion?: SecureVersion

      Optionally set the maximum TLS version to allow. One of 'TLSv1.3', 'TLSv1.2', 'TLSv1.1', or 'TLSv1'. Cannot be specified along with the secureProtocol option, use one or the other. Default: 'TLSv1.3', unless changed using CLI options. Using --tls-max-v1.2 sets the default to 'TLSv1.2'. Using --tls-max-v1.3 sets the default to 'TLSv1.3'. If multiple of the options are provided, the highest maximum is used.

    • minDHSize?: number
    • minVersion?: SecureVersion

      Optionally set the minimum TLS version to allow. One of 'TLSv1.3', 'TLSv1.2', 'TLSv1.1', or 'TLSv1'. Cannot be specified along with the secureProtocol option, use one or the other. It is not recommended to use less than TLSv1.2, but it may be required for interoperability. Default: 'TLSv1.2', unless changed using CLI options. Using --tls-v1.0 sets the default to 'TLSv1'. Using --tls-v1.1 sets the default to 'TLSv1.1'. Using --tls-min-v1.3 sets the default to 'TLSv1.3'. If multiple of the options are provided, the lowest minimum is used.

    • passphrase?: string

      Shared passphrase used for a single private key and/or a PFX.

    • path?: string
    • pfx?: string | Buffer<ArrayBufferLike> | string | Buffer<ArrayBufferLike> | PxfObject[]

      PFX or PKCS12 encoded private key and certificate chain. pfx is an alternative to providing key and cert individually. PFX is usually encrypted, if it is, passphrase will be used to decrypt it. Multiple PFX can be provided either as an array of unencrypted PFX buffers, or an array of objects in the form {buf: <string|buffer>[, passphrase: <string>]}. The object form can only occur in an array. object.passphrase is optional. Encrypted PFX will be decrypted with object.passphrase if provided, or options.passphrase if it is not.

    • port?: number
    • protocol?: 'http:' | 'https:'
    • rejectUnauthorized?: boolean

      If true the server will reject any connection which is not authorized with the list of supplied CAs. This option only has an effect if requestCert is true.

    • requestCert?: boolean

      If true the server will request a certificate from clients that connect and attempt to verify that certificate. Defaults to false.

    • secureContext?: SecureContext

      An optional TLS context object from tls.createSecureContext()

    • secureOptions?: number

      Optionally affect the OpenSSL protocol behavior, which is not usually necessary. This should be used carefully if at all! Value is a numeric bitmask of the SSL_OP_* options from OpenSSL Options

    • secureProtocol?: string

      Legacy mechanism to select the TLS protocol version to use, it does not support independent control of the minimum and maximum version, and does not support limiting the protocol to TLSv1.3. Use minVersion and maxVersion instead. The possible values are listed as SSL_METHODS, use the function names as strings. For example, use 'TLSv1_1_method' to force TLS version 1.1, or 'TLS_method' to allow any TLS protocol version up to TLSv1.3. It is not recommended to use TLS versions less than 1.2, but it may be required for interoperability. Default: none, see minVersion.

    • servername?: string
    • session?: Buffer<ArrayBufferLike>
    • sessionIdContext?: string

      Opaque identifier used by servers to ensure session state is not shared between applications. Unused by clients.

    • sessionTimeout?: number

      The number of seconds after which a TLS session created by the server will no longer be resumable. See Session Resumption for more information. Default: 300.

    • sigalgs?: string

      Colon-separated list of supported signature algorithms. The list can contain digest algorithms (SHA256, MD5 etc.), public key algorithms (RSA-PSS, ECDSA etc.), a combination of both (e.g. 'RSA+SHA384'), or TLS v1.3 scheme names (e.g. rsa_pss_pss_sha512).

    • SNICallback?: (servername: string, cb: (err: null | Error, ctx?: SecureContext) => void) => void

      SNICallback(servername, cb) <Function> A function that will be called if the client supports SNI TLS extension. Two arguments will be passed when called: servername and cb. SNICallback should invoke cb(null, ctx), where ctx is a SecureContext instance. (tls.createSecureContext(...) can be used to get a proper SecureContext.) If SNICallback wasn't provided the default callback with high-level API will be used (see below).

    • ticketKeys?: Buffer<ArrayBufferLike>

      48-bytes of cryptographically strong pseudo-random data. See Session Resumption for more information.

    • timeout?: number
    • unknownProtocolTimeout?: number

      Specifies a timeout in milliseconds that a server should wait when an 'unknownProtocol' event is emitted. If the socket has not been destroyed by that time the server will destroy it.

    • pskCallback?(
      hint: null | string
      ): null | PSKCallbackNegotation;

      When negotiating TLS-PSK (pre-shared keys), this function is called with an optional identity hint provided by the server, or null in the case of TLS 1.3 where the hint was removed. It will be necessary to provide a custom tls.checkServerIdentity() for the connection, as the default one will try to check the hostname/IP of the server against the certificate, which is not applicable for PSK because there won't be a certificate present. More information can be found in RFC 4279.

      @param hint

      message sent from the server to help client decide which identity to use during negotiation. Always null if TLS 1.3 is used.

      @returns

      Return null to stop the negotiation process. psk must be compatible with the selected cipher's digest. identity must use UTF-8 encoding.

    • selectPadding?(
      frameLen: number,
      maxFrameLen: number
      ): number;
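
    Putting a few of these options together, a minimal client sketch (the certificate file name and port are hypothetical; every option not shown keeps its default):

    import http2 from 'node:http2';
    import fs from 'node:fs';
    
    const client = http2.connect('https://localhost:8443', {
      ca: fs.readFileSync('localhost-cert.pem'), // hypothetical self-signed CA/cert
      rejectUnauthorized: true,
    });
    
    const req = client.request({ ':path': '/' });
    req.setEncoding('utf8');
    let body = '';
    req.on('data', (chunk) => { body += chunk; });
    req.on('end', () => {
      console.log(body);
      client.close();
    });
    req.end();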
  • interface SecureServerOptions<Http1Request extends typeof IncomingMessage = typeof IncomingMessage, Http1Response extends typeof ServerResponse = typeof ServerResponse, Http2Request extends typeof Http2ServerRequest = typeof Http2ServerRequest, Http2Response extends typeof Http2ServerResponse = typeof Http2ServerResponse>

    • allowHalfOpen?: boolean

      Indicates whether half-opened TCP connections are allowed.

    • allowHTTP1?: boolean
    • allowPartialTrustChain?: boolean

      Treat intermediate (non-self-signed) certificates in the trust CA certificate list as trusted.

    • ALPNCallback?: (arg: { protocols: string[]; servername: string }) => undefined | string

      If set, this will be called when a client opens a connection using the ALPN extension. One argument will be passed to the callback: an object containing servername and protocols fields, respectively containing the server name from the SNI extension (if any) and an array of ALPN protocol name strings. The callback must return either one of the strings listed in protocols, which will be returned to the client as the selected ALPN protocol, or undefined, to reject the connection with a fatal alert. If a string is returned that does not match one of the client's ALPN protocols, an error will be thrown. This option cannot be used with the ALPNProtocols option, and setting both options will throw an error.

    • ALPNProtocols?: Uint8Array<ArrayBufferLike> | string[] | Uint8Array<ArrayBufferLike>[]

      An array of strings or a Buffer naming possible ALPN protocols. (Protocols should be ordered by their priority.)

    • blockList?: BlockList

      blockList can be used for disabling inbound access to specific IP addresses, IP ranges, or IP subnets. This does not work if the server is behind a reverse proxy, NAT, etc. because the address checked against the block list is the address of the proxy, or the one specified by the NAT.

    • ca?: string | Buffer<ArrayBufferLike> | string | Buffer<ArrayBufferLike>[]

      Optionally override the trusted CA certificates. Default is to trust the well-known CAs curated by Mozilla. Mozilla's CAs are completely replaced when CAs are explicitly specified using this option.

    • cert?: string | Buffer<ArrayBufferLike> | string | Buffer<ArrayBufferLike>[]

      Cert chains in PEM format. One cert chain should be provided per private key. Each cert chain should consist of the PEM formatted certificate for a provided private key, followed by the PEM formatted intermediate certificates (if any), in order, and not including the root CA (the root CA must be pre-known to the peer, see ca). When providing multiple cert chains, they do not have to be in the same order as their private keys in key. If the intermediate certificates are not provided, the peer will not be able to validate the certificate, and the handshake will fail.

    • ciphers?: string

      Cipher suite specification, replacing the default. For more information, see modifying the default cipher suite. Permitted ciphers can be obtained via tls.getCiphers(). Cipher names must be uppercased in order for OpenSSL to accept them.

    • crl?: string | Buffer<ArrayBufferLike> | string | Buffer<ArrayBufferLike>[]

      PEM formatted CRLs (Certificate Revocation Lists).

    • dhparam?: string | Buffer<ArrayBufferLike>

      'auto' or custom Diffie-Hellman parameters, required for non-ECDHE perfect forward secrecy. If omitted or invalid, the parameters are silently discarded and DHE ciphers will not be available. ECDHE-based perfect forward secrecy will still be available.

    • ecdhCurve?: string

      A string describing a named curve or a colon separated list of curve NIDs or names, for example P-521:P-384:P-256, to use for ECDH key agreement. Set to auto to select the curve automatically. Use crypto.getCurves() to obtain a list of available curve names. On recent releases, openssl ecparam -list_curves will also display the name and description of each available elliptic curve. Default: tls.DEFAULT_ECDH_CURVE.

    • enableTrace?: boolean

      When enabled, TLS packet trace information is written to stderr. This can be used to debug TLS connection problems.

    • handshakeTimeout?: number

      Abort the connection if the SSL/TLS handshake does not finish in the specified number of milliseconds. A 'tlsClientError' is emitted on the tls.Server object whenever a handshake times out. Default: 120000 (120 seconds).

    • highWaterMark?: number

      Optionally overrides all net.Sockets' readableHighWaterMark and writableHighWaterMark.

    • honorCipherOrder?: boolean

      Attempt to use the server's cipher suite preferences instead of the client's. When true, causes SSL_OP_CIPHER_SERVER_PREFERENCE to be set in secureOptions

    • Http1IncomingMessage?: Http1Request
    • Http1ServerResponse?: Http1Response
    • Http2ServerRequest?: Http2Request
    • Http2ServerResponse?: Http2Response
    • keepAlive?: boolean

      If set to true, it enables keep-alive functionality on the socket immediately after a new incoming connection is received, similar to what is done in socket.setKeepAlive([enable][, initialDelay]).

    • keepAliveInitialDelay?: number

      If set to a positive number, it sets the initial delay before the first keepalive probe is sent on an idle socket.

    • key?: string | Buffer<ArrayBufferLike> | string | Buffer<ArrayBufferLike> | KeyObject[]

      Private keys in PEM format. PEM allows the option of private keys being encrypted. Encrypted keys will be decrypted with options.passphrase. Multiple keys using different algorithms can be provided either as an array of unencrypted key strings or buffers, or an array of objects in the form {pem: <string|buffer>[, passphrase: <string>]}. The object form can only occur in an array. object.passphrase is optional. Encrypted keys will be decrypted with object.passphrase if provided, or options.passphrase if it is not.

    • maxVersion?: SecureVersion

      Optionally set the maximum TLS version to allow. One of 'TLSv1.3', 'TLSv1.2', 'TLSv1.1', or 'TLSv1'. Cannot be specified along with the secureProtocol option, use one or the other. Default: 'TLSv1.3', unless changed using CLI options. Using --tls-max-v1.2 sets the default to 'TLSv1.2'. Using --tls-max-v1.3 sets the default to 'TLSv1.3'. If multiple of the options are provided, the highest maximum is used.

    • minVersion?: SecureVersion

      Optionally set the minimum TLS version to allow. One of 'TLSv1.3', 'TLSv1.2', 'TLSv1.1', or 'TLSv1'. Cannot be specified along with the secureProtocol option, use one or the other. It is not recommended to use less than TLSv1.2, but it may be required for interoperability. Default: 'TLSv1.2', unless changed using CLI options. Using --tls-v1.0 sets the default to 'TLSv1'. Using --tls-v1.1 sets the default to 'TLSv1.1'. Using --tls-min-v1.3 sets the default to 'TLSv1.3'. If multiple of the options are provided, the lowest minimum is used.

    • noDelay?: boolean

      If set to true, it disables the use of Nagle's algorithm immediately after a new incoming connection is received.

    • origins?: string[]
    • passphrase?: string

      Shared passphrase used for a single private key and/or a PFX.

    • pauseOnConnect?: boolean

      Indicates whether the socket should be paused on incoming connections.

    • pfx?: string | Buffer<ArrayBufferLike> | string | Buffer<ArrayBufferLike> | PxfObject[]

      PFX or PKCS12 encoded private key and certificate chain. pfx is an alternative to providing key and cert individually. PFX is usually encrypted, if it is, passphrase will be used to decrypt it. Multiple PFX can be provided either as an array of unencrypted PFX buffers, or an array of objects in the form {buf: <string|buffer>[, passphrase: <string>]}. The object form can only occur in an array. object.passphrase is optional. Encrypted PFX will be decrypted with object.passphrase if provided, or options.passphrase if it is not.

    • pskIdentityHint?: string

      hint to send to a client to help with selecting the identity during TLS-PSK negotiation. Will be ignored in TLS 1.3. Upon failing to set pskIdentityHint tlsClientError will be emitted with ERR_TLS_PSK_SET_IDENTIY_HINT_FAILED code.

    • rejectUnauthorized?: boolean

      If true the server will reject any connection which is not authorized with the list of supplied CAs. This option only has an effect if requestCert is true.

    • requestCert?: boolean

      If true the server will request a certificate from clients that connect and attempt to verify that certificate. Defaults to false.

    • secureContext?: SecureContext

      An optional TLS context object from tls.createSecureContext()

    • secureOptions?: number

      Optionally affect the OpenSSL protocol behavior, which is not usually necessary. This should be used carefully if at all! Value is a numeric bitmask of the SSL_OP_* options from OpenSSL Options

    • secureProtocol?: string

      Legacy mechanism to select the TLS protocol version to use, it does not support independent control of the minimum and maximum version, and does not support limiting the protocol to TLSv1.3. Use minVersion and maxVersion instead. The possible values are listed as SSL_METHODS, use the function names as strings. For example, use 'TLSv1_1_method' to force TLS version 1.1, or 'TLS_method' to allow any TLS protocol version up to TLSv1.3. It is not recommended to use TLS versions less than 1.2, but it may be required for interoperability. Default: none, see minVersion.

    • sessionIdContext?: string

      Opaque identifier used by servers to ensure session state is not shared between applications. Unused by clients.

    • sessionTimeout?: number

      The number of seconds after which a TLS session created by the server will no longer be resumable. See Session Resumption for more information. Default: 300.

    • sigalgs?: string

      Colon-separated list of supported signature algorithms. The list can contain digest algorithms (SHA256, MD5 etc.), public key algorithms (RSA-PSS, ECDSA etc.), a combination of both (e.g. 'RSA+SHA384'), or TLS v1.3 scheme names (e.g. rsa_pss_pss_sha512).

    • SNICallback?: (servername: string, cb: (err: null | Error, ctx?: SecureContext) => void) => void

      SNICallback(servername, cb) <Function> A function that will be called if the client supports SNI TLS extension. Two arguments will be passed when called: servername and cb. SNICallback should invoke cb(null, ctx), where ctx is a SecureContext instance. (tls.createSecureContext(...) can be used to get a proper SecureContext.) If SNICallback wasn't provided the default callback with high-level API will be used (see below).

    • ticketKeys?: Buffer<ArrayBufferLike>

      48-bytes of cryptographically strong pseudo-random data.

    • unknownProtocolTimeout?: number

      Specifies a timeout in milliseconds that a server should wait when an 'unknownProtocol' event is emitted. If the socket has not been destroyed by that time the server will destroy it.

    • pskCallback?(
      socket: TLSSocket,
      identity: string
      ): null | TypedArray<ArrayBufferLike> | DataView<ArrayBufferLike>;
      @param identity

      identity parameter sent from the client.

      @returns

      pre-shared key that must either be a buffer or null to stop the negotiation process. Returned PSK must be compatible with the selected cipher's digest.

      When negotiating TLS-PSK (pre-shared keys), this function is called with the identity provided by the client. If the return value is null the negotiation process will stop and an "unknown_psk_identity" alert message will be sent to the other party. If the server wishes to hide the fact that the PSK identity was not known, the callback must provide some random data as psk to make the connection fail with "decrypt_error" before negotiation is finished. PSK ciphers are disabled by default, and using TLS-PSK thus requires explicitly specifying a cipher suite with the ciphers option. More information can be found in the RFC 4279.

    • selectPadding?(
      frameLen: number,
      maxFrameLen: number
      ): number;
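
    As a rough sketch of how several of these options combine in http2.createSecureServer() (the key and certificate file names are hypothetical):

    import http2 from 'node:http2';
    import fs from 'node:fs';
    
    const server = http2.createSecureServer({
      key: fs.readFileSync('localhost-privkey.pem'), // hypothetical key file
      cert: fs.readFileSync('localhost-cert.pem'),   // hypothetical cert file
      allowHTTP1: true,        // fall back to HTTP/1.1 for clients without ALPN h2
      handshakeTimeout: 30000, // abort slow TLS handshakes after 30 seconds
    });
    
    server.on('stream', (stream, headers) => {
      stream.respond({ ':status': 200, 'content-type': 'text/plain; charset=utf-8' });
      stream.end('hello over HTTP/2');
    });
    
    server.listen(8443);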
  • interface SecureServerSessionOptions<Http1Request extends typeof IncomingMessage = typeof IncomingMessage, Http1Response extends typeof ServerResponse = typeof ServerResponse, Http2Request extends typeof Http2ServerRequest = typeof Http2ServerRequest, Http2Response extends typeof Http2ServerResponse = typeof Http2ServerResponse>

    • allowHalfOpen?: boolean

      Indicates whether half-opened TCP connections are allowed.

    • allowPartialTrustChain?: boolean

      Treat intermediate (non-self-signed) certificates in the trust CA certificate list as trusted.

    • ALPNCallback?: (arg: { protocols: string[]; servername: string }) => undefined | string

      If set, this will be called when a client opens a connection using the ALPN extension. One argument will be passed to the callback: an object containing servername and protocols fields, respectively containing the server name from the SNI extension (if any) and an array of ALPN protocol name strings. The callback must return either one of the strings listed in protocols, which will be returned to the client as the selected ALPN protocol, or undefined, to reject the connection with a fatal alert. If a string is returned that does not match one of the client's ALPN protocols, an error will be thrown. This option cannot be used with the ALPNProtocols option, and setting both options will throw an error.

    • ALPNProtocols?: Uint8Array<ArrayBufferLike> | string[] | Uint8Array<ArrayBufferLike>[]

      An array of strings or a Buffer naming possible ALPN protocols. (Protocols should be ordered by their priority.)

    • blockList?: BlockList

      blockList can be used for disabling inbound access to specific IP addresses, IP ranges, or IP subnets. This does not work if the server is behind a reverse proxy, NAT, etc. because the address checked against the block list is the address of the proxy, or the one specified by the NAT.

    • ca?: string | Buffer<ArrayBufferLike> | string | Buffer<ArrayBufferLike>[]

      Optionally override the trusted CA certificates. Default is to trust the well-known CAs curated by Mozilla. Mozilla's CAs are completely replaced when CAs are explicitly specified using this option.

    • cert?: string | Buffer<ArrayBufferLike> | string | Buffer<ArrayBufferLike>[]

      Cert chains in PEM format. One cert chain should be provided per private key. Each cert chain should consist of the PEM formatted certificate for a provided private key, followed by the PEM formatted intermediate certificates (if any), in order, and not including the root CA (the root CA must be pre-known to the peer, see ca). When providing multiple cert chains, they do not have to be in the same order as their private keys in key. If the intermediate certificates are not provided, the peer will not be able to validate the certificate, and the handshake will fail.

    • ciphers?: string

      Cipher suite specification, replacing the default. For more information, see modifying the default cipher suite. Permitted ciphers can be obtained via tls.getCiphers(). Cipher names must be uppercased in order for OpenSSL to accept them.

    • crl?: string | Buffer<ArrayBufferLike> | string | Buffer<ArrayBufferLike>[]

      PEM formatted CRLs (Certificate Revocation Lists).

    • dhparam?: string | Buffer<ArrayBufferLike>

      'auto' or custom Diffie-Hellman parameters, required for non-ECDHE perfect forward secrecy. If omitted or invalid, the parameters are silently discarded and DHE ciphers will not be available. ECDHE-based perfect forward secrecy will still be available.

    • ecdhCurve?: string

      A string describing a named curve or a colon separated list of curve NIDs or names, for example P-521:P-384:P-256, to use for ECDH key agreement. Set to auto to select the curve automatically. Use crypto.getCurves() to obtain a list of available curve names. On recent releases, openssl ecparam -list_curves will also display the name and description of each available elliptic curve. Default: tls.DEFAULT_ECDH_CURVE.

    • enableTrace?: boolean

      When enabled, TLS packet trace information is written to stderr. This can be used to debug TLS connection problems.

    • handshakeTimeout?: number

      Abort the connection if the SSL/TLS handshake does not finish in the specified number of milliseconds. A 'tlsClientError' is emitted on the tls.Server object whenever a handshake times out. Default: 120000 (120 seconds).

    • highWaterMark?: number

      Optionally overrides all net.Sockets' readableHighWaterMark and writableHighWaterMark.

    • honorCipherOrder?: boolean

      Attempt to use the server's cipher suite preferences instead of the client's. When true, causes SSL_OP_CIPHER_SERVER_PREFERENCE to be set in secureOptions

    • Http1IncomingMessage?: Http1Request
    • Http1ServerResponse?: Http1Response
    • Http2ServerRequest?: Http2Request
    • Http2ServerResponse?: Http2Response
    • keepAlive?: boolean

      If set to true, it enables keep-alive functionality on the socket immediately after a new incoming connection is received, similar to what is done in socket.setKeepAlive([enable][, initialDelay]).

    • keepAliveInitialDelay?: number

      If set to a positive number, it sets the initial delay before the first keepalive probe is sent on an idle socket.

    • key?: string | Buffer<ArrayBufferLike> | string | Buffer<ArrayBufferLike> | KeyObject[]

      Private keys in PEM format. PEM allows the option of private keys being encrypted. Encrypted keys will be decrypted with options.passphrase. Multiple keys using different algorithms can be provided either as an array of unencrypted key strings or buffers, or an array of objects in the form {pem: <string|buffer>[, passphrase: <string>]}. The object form can only occur in an array. object.passphrase is optional. Encrypted keys will be decrypted with object.passphrase if provided, or options.passphrase if it is not.

    • maxVersion?: SecureVersion

      Optionally set the maximum TLS version to allow. One of 'TLSv1.3', 'TLSv1.2', 'TLSv1.1', or 'TLSv1'. Cannot be specified along with the secureProtocol option, use one or the other. Default: 'TLSv1.3', unless changed using CLI options. Using --tls-max-v1.2 sets the default to 'TLSv1.2'. Using --tls-max-v1.3 sets the default to 'TLSv1.3'. If multiple of the options are provided, the highest maximum is used.

    • minVersion?: SecureVersion

      Optionally set the minimum TLS version to allow. One of 'TLSv1.3', 'TLSv1.2', 'TLSv1.1', or 'TLSv1'. Cannot be specified along with the secureProtocol option, use one or the other. It is not recommended to use less than TLSv1.2, but it may be required for interoperability. Default: 'TLSv1.2', unless changed using CLI options. Using --tls-v1.0 sets the default to 'TLSv1'. Using --tls-v1.1 sets the default to 'TLSv1.1'. Using --tls-min-v1.3 sets the default to 'TLSv1.3'. If multiple of the options are provided, the lowest minimum is used.

    • noDelay?: boolean

      If set to true, it disables the use of Nagle's algorithm immediately after a new incoming connection is received.

    • passphrase?: string

      Shared passphrase used for a single private key and/or a PFX.

    • pauseOnConnect?: boolean

      Indicates whether the socket should be paused on incoming connections.

    • pfx?: string | Buffer<ArrayBufferLike> | string | Buffer<ArrayBufferLike> | PxfObject[]

      PFX or PKCS12 encoded private key and certificate chain. pfx is an alternative to providing key and cert individually. PFX is usually encrypted, if it is, passphrase will be used to decrypt it. Multiple PFX can be provided either as an array of unencrypted PFX buffers, or an array of objects in the form {buf: <string|buffer>[, passphrase: <string>]}. The object form can only occur in an array. object.passphrase is optional. Encrypted PFX will be decrypted with object.passphrase if provided, or options.passphrase if it is not.

    • pskIdentityHint?: string

      hint to send to a client to help with selecting the identity during TLS-PSK negotiation. Will be ignored in TLS 1.3. Upon failing to set pskIdentityHint tlsClientError will be emitted with ERR_TLS_PSK_SET_IDENTIY_HINT_FAILED code.

    • rejectUnauthorized?: boolean

      If true the server will reject any connection which is not authorized with the list of supplied CAs. This option only has an effect if requestCert is true.

    • requestCert?: boolean

      If true the server will request a certificate from clients that connect and attempt to verify that certificate. Defaults to false.

    • secureContext?: SecureContext

      An optional TLS context object from tls.createSecureContext()

    • secureOptions?: number

      Optionally affect the OpenSSL protocol behavior, which is not usually necessary. This should be used carefully if at all! Value is a numeric bitmask of the SSL_OP_* options from OpenSSL Options

    • secureProtocol?: string

      Legacy mechanism to select the TLS protocol version to use, it does not support independent control of the minimum and maximum version, and does not support limiting the protocol to TLSv1.3. Use minVersion and maxVersion instead. The possible values are listed as SSL_METHODS, use the function names as strings. For example, use 'TLSv1_1_method' to force TLS version 1.1, or 'TLS_method' to allow any TLS protocol version up to TLSv1.3. It is not recommended to use TLS versions less than 1.2, but it may be required for interoperability. Default: none, see minVersion.

    • sessionIdContext?: string

      Opaque identifier used by servers to ensure session state is not shared between applications. Unused by clients.

    • sessionTimeout?: number

      The number of seconds after which a TLS session created by the server will no longer be resumable. See Session Resumption for more information. Default: 300.

    • sigalgs?: string

      Colon-separated list of supported signature algorithms. The list can contain digest algorithms (SHA256, MD5 etc.), public key algorithms (RSA-PSS, ECDSA etc.), a combination of both (e.g. 'RSA+SHA384'), or TLS v1.3 scheme names (e.g. rsa_pss_pss_sha512).

    • SNICallback?: (servername: string, cb: (err: null | Error, ctx?: SecureContext) => void) => void

      SNICallback(servername, cb) <Function> A function that will be called if the client supports SNI TLS extension. Two arguments will be passed when called: servername and cb. SNICallback should invoke cb(null, ctx), where ctx is a SecureContext instance. (tls.createSecureContext(...) can be used to get a proper SecureContext.) If SNICallback wasn't provided the default callback with high-level API will be used (see below).

    • ticketKeys?: Buffer<ArrayBufferLike>

      48-bytes of cryptographically strong pseudo-random data.

    • unknownProtocolTimeout?: number

      Specifies a timeout in milliseconds that a server should wait when an 'unknownProtocol' event is emitted. If the socket has not been destroyed by that time the server will destroy it.

    • pskCallback?(
      socket: TLSSocket,
      identity: string
      ): null | TypedArray<ArrayBufferLike> | DataView<ArrayBufferLike>;
      @param identity

      identity parameter sent from the client.

      @returns

      pre-shared key that must either be a buffer or null to stop the negotiation process. Returned PSK must be compatible with the selected cipher's digest.

      When negotiating TLS-PSK (pre-shared keys), this function is called with the identity provided by the client. If the return value is null the negotiation process will stop and an "unknown_psk_identity" alert message will be sent to the other party. If the server wishes to hide the fact that the PSK identity was not known, the callback must provide some random data as psk to make the connection fail with "decrypt_error" before negotiation is finished. PSK ciphers are disabled by default, and using TLS-PSK thus requires explicitly specifying a cipher suite with the ciphers option. More information can be found in the RFC 4279.

    • selectPadding?(
      frameLen: number,
      maxFrameLen: number
      ): number;
  • interface ServerHttp2Session<Http1Request extends typeof IncomingMessage = typeof IncomingMessage, Http1Response extends typeof ServerResponse = typeof ServerResponse, Http2Request extends typeof Http2ServerRequest = typeof Http2ServerRequest, Http2Response extends typeof Http2ServerResponse = typeof Http2ServerResponse>

    The EventEmitter class is defined and exposed by the node:events module:

    import { EventEmitter } from 'node:events';
    

    All EventEmitters emit the event 'newListener' when new listeners are added and 'removeListener' when existing listeners are removed.

    It supports the captureRejections option, which enables automatic capturing of promise rejections (default: false).

    • readonly alpnProtocol?: string

      Value will be undefined if the Http2Session is not yet connected to a socket, h2c if the Http2Session is not connected to a TLSSocket, or will return the value of the connected TLSSocket's own alpnProtocol property.

    • readonly closed: boolean

      Will be true if this Http2Session instance has been closed, otherwise false.

    • readonly connecting: boolean

      Will be true if this Http2Session instance is still connecting, will be set to false before emitting connect event and/or calling the http2.connect callback.

    • readonly destroyed: boolean

      Will be true if this Http2Session instance has been destroyed and must no longer be used, otherwise false.

    • readonly encrypted?: boolean

      Value is undefined if the Http2Session session socket has not yet been connected, true if the Http2Session is connected with a TLSSocket, and false if the Http2Session is connected to any other kind of socket or stream.

    • readonly localSettings: Settings

      A prototype-less object describing the current local settings of this Http2Session. The local settings are local to this Http2Session instance.

    • readonly originSet?: string[]

      If the Http2Session is connected to a TLSSocket, the originSet property will return an Array of origins for which the Http2Session may be considered authoritative.

      The originSet property is only available when using a secure TLS connection.

    • readonly pendingSettingsAck: boolean

      Indicates whether the Http2Session is currently waiting for acknowledgment of a sent SETTINGS frame. Will be true after calling the http2session.settings() method. Will be false once all sent SETTINGS frames have been acknowledged.

    • readonly remoteSettings: Settings

      A prototype-less object describing the current remote settings of this Http2Session. The remote settings are set by the connected HTTP/2 peer.

    • readonly server: Http2Server<Http1Request, Http1Response, Http2Request, Http2Response> | Http2SecureServer<Http1Request, Http1Response, Http2Request, Http2Response>
    • readonly socket: Socket | TLSSocket

      Returns a Proxy object that acts as a net.Socket (or tls.TLSSocket) but limits available methods to ones safe to use with HTTP/2.

      destroy, emit, end, pause, read, resume, and write will throw an error with code ERR_HTTP2_NO_SOCKET_MANIPULATION. See Http2Session and Sockets for more information.

      setTimeout method will be called on this Http2Session.

      All other interactions will be routed directly to the socket.

    • readonly state: SessionState

      Provides miscellaneous information about the current state of the Http2Session.

      An object describing the current status of this Http2Session.

    • readonly type: number

      The http2session.type will be equal to http2.constants.NGHTTP2_SESSION_SERVER if this Http2Session instance is a server, and http2.constants.NGHTTP2_SESSION_CLIENT if the instance is a client.

    • error: Error,
      event: string | symbol,
      ...args: AnyRest
      ): void;
    • addListener(
      event: 'connect',
      listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>, socket: Socket | TLSSocket) => void
      ): this;

      Alias for emitter.on(eventName, listener).

      addListener(
      event: 'stream',
      listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void
      ): this;

      Alias for emitter.on(eventName, listener).

      addListener(
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Alias for emitter.on(eventName, listener).

    • altsvc(
      alt: string,
      originOrStream: string | number | URL | AlternativeServiceOptions
      ): void;

      Submits an ALTSVC frame (as defined by RFC 7838) to the connected client.

      import http2 from 'node:http2';
      
      const server = http2.createServer();
      server.on('session', (session) => {
        // Set altsvc for origin https://example.org:80
        session.altsvc('h2=":8000"', 'https://example.org:80');
      });
      
      server.on('stream', (stream) => {
        // Set altsvc for a specific stream
        stream.session.altsvc('h2=":8000"', stream.id);
      });
      

      Sending an ALTSVC frame with a specific stream ID indicates that the alternate service is associated with the origin of the given Http2Stream.

      The alt and origin string must contain only ASCII bytes and are strictly interpreted as a sequence of ASCII bytes. The special value 'clear' may be passed to clear any previously set alternative service for a given domain.

      When a string is passed for the originOrStream argument, it will be parsed as a URL and the origin will be derived. For instance, the origin for the HTTP URL 'https://example.org/foo/bar' is the ASCII string 'https://example.org'. An error will be thrown if either the given string cannot be parsed as a URL or if a valid origin cannot be derived.

      A URL object, or any object with an origin property, may be passed as originOrStream, in which case the value of the origin property will be used. The value of the origin property must be a properly serialized ASCII origin.

      @param alt

      A description of the alternative service configuration as defined by RFC 7838.

      @param originOrStream

      Either a URL string specifying the origin (or an Object with an origin property) or the numeric identifier of an active Http2Stream as given by the http2stream.id property.

    • close(
      callback?: () => void
      ): void;

      Gracefully closes the Http2Session, allowing any existing streams to complete on their own and preventing new Http2Stream instances from being created. Once closed, http2session.destroy() might be called if there are no open Http2Stream instances.

      If specified, the callback function is registered as a handler for the 'close' event.
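
      For example, a minimal sketch (assuming session is a ServerHttp2Session obtained from the server's 'session' event):

      // Stop accepting new streams; existing ones may finish on their own.
      session.close(() => {
        console.log('all streams finished; session closed');
      });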

    • destroy(
      error?: Error,
      code?: number
      ): void;

      Immediately terminates the Http2Session and the associated net.Socket or tls.TLSSocket.

      Once destroyed, the Http2Session will emit the 'close' event. If error is not undefined, an 'error' event will be emitted immediately before the 'close' event.

      If there are any remaining open Http2Streams associated with the Http2Session, those will also be destroyed.

      @param error

      An Error object if the Http2Session is being destroyed due to an error.

      @param code

      The HTTP/2 error code to send in the final GOAWAY frame. If unspecified, and error is not undefined, the default is INTERNAL_ERROR, otherwise defaults to NO_ERROR.

    • emit(
      event: 'connect',
      session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>,
      socket: Socket | TLSSocket
      ): boolean;

      Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

      Returns true if the event had listeners, false otherwise.

      import { EventEmitter } from 'node:events';
      const myEmitter = new EventEmitter();
      
      // First listener
      myEmitter.on('event', function firstListener() {
        console.log('Helloooo! first listener');
      });
      // Second listener
      myEmitter.on('event', function secondListener(arg1, arg2) {
        console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
      });
      // Third listener
      myEmitter.on('event', function thirdListener(...args) {
        const parameters = args.join(', ');
        console.log(`event with parameters ${parameters} in third listener`);
      });
      
      console.log(myEmitter.listeners('event'));
      
      myEmitter.emit('event', 1, 2, 3, 4, 5);
      
      // Prints:
      // [
      //   [Function: firstListener],
      //   [Function: secondListener],
      //   [Function: thirdListener]
      // ]
      // Helloooo! first listener
      // event with parameters 1, 2 in second listener
      // event with parameters 1, 2, 3, 4, 5 in third listener
      
      emit(
      event: 'stream',
      stream: ServerHttp2Stream,
      headers: IncomingHttpHeaders,
      flags: number
      ): boolean;

      Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

      Returns true if the event had listeners, false otherwise.

      import { EventEmitter } from 'node:events';
      const myEmitter = new EventEmitter();
      
      // First listener
      myEmitter.on('event', function firstListener() {
        console.log('Helloooo! first listener');
      });
      // Second listener
      myEmitter.on('event', function secondListener(arg1, arg2) {
        console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
      });
      // Third listener
      myEmitter.on('event', function thirdListener(...args) {
        const parameters = args.join(', ');
        console.log(`event with parameters ${parameters} in third listener`);
      });
      
      console.log(myEmitter.listeners('event'));
      
      myEmitter.emit('event', 1, 2, 3, 4, 5);
      
      // Prints:
      // [
      //   [Function: firstListener],
      //   [Function: secondListener],
      //   [Function: thirdListener]
      // ]
      // Helloooo! first listener
      // event with parameters 1, 2 in second listener
      // event with parameters 1, 2, 3, 4, 5 in third listener
      
      emit(
      event: string | symbol,
      ...args: any[]
      ): boolean;

      Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

      Returns true if the event had listeners, false otherwise.

      import { EventEmitter } from 'node:events';
      const myEmitter = new EventEmitter();
      
      // First listener
      myEmitter.on('event', function firstListener() {
        console.log('Helloooo! first listener');
      });
      // Second listener
      myEmitter.on('event', function secondListener(arg1, arg2) {
        console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
      });
      // Third listener
      myEmitter.on('event', function thirdListener(...args) {
        const parameters = args.join(', ');
        console.log(`event with parameters ${parameters} in third listener`);
      });
      
      console.log(myEmitter.listeners('event'));
      
      myEmitter.emit('event', 1, 2, 3, 4, 5);
      
      // Prints:
      // [
      //   [Function: firstListener],
      //   [Function: secondListener],
      //   [Function: thirdListener]
      // ]
      // Helloooo! first listener
      // event with parameters 1, 2 in second listener
      // event with parameters 1, 2, 3, 4, 5 in third listener
      
    • eventNames(): string | symbol[];

      Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or Symbols.

      import { EventEmitter } from 'node:events';
      
      const myEE = new EventEmitter();
      myEE.on('foo', () => {});
      myEE.on('bar', () => {});
      
      const sym = Symbol('symbol');
      myEE.on(sym, () => {});
      
      console.log(myEE.eventNames());
      // Prints: [ 'foo', 'bar', Symbol(symbol) ]
      
    • getMaxListeners(): number;

      Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to EventEmitter.defaultMaxListeners.

    • code?: number,
      lastStreamID?: number,
      opaqueData?: ArrayBufferView<ArrayBufferLike>
      ): void;

      Transmits a GOAWAY frame to the connected peer without shutting down the Http2Session.

      @param code

      An HTTP/2 error code

      @param lastStreamID

      The numeric ID of the last processed Http2Stream

      @param opaqueData

      A TypedArray or DataView instance containing additional data to be carried within the GOAWAY frame.
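
      As a sketch, a server might announce that it is winding down while keeping the session open for in-flight streams (the error code and opaque payload are illustrative):

      import http2 from 'node:http2';

      const server = http2.createServer();
      server.on('session', (session) => {
        // Advise the peer that no new streams should be opened, without destroying the session.
        session.goaway(
          http2.constants.NGHTTP2_NO_ERROR,
          0,
          Buffer.from('shutting down'),
        );
      });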

    • eventName: string | symbol,
      listener?: Function
      ): number;

      Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.

      @param eventName

      The name of the event being listened for

      @param listener

      The event handler function
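
      For example (a generic EventEmitter sketch):

      import { EventEmitter } from 'node:events';

      const ee = new EventEmitter();
      const handler = () => {};
      ee.on('ping', handler);
      ee.on('ping', handler);

      console.log(ee.listenerCount('ping'));          // 2
      console.log(ee.listenerCount('ping', handler)); // 2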

    • eventName: string | symbol
      ): Function[];

      Returns a copy of the array of listeners for the event named eventName.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      console.log(util.inspect(server.listeners('connection')));
      // Prints: [ [Function] ]
      
    • off<K>(
      eventName: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Alias for emitter.removeListener().

    • event: 'connect',
      listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>, socket: Socket | TLSSocket) => void
      ): this;

      Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.on('foo', () => console.log('a'));
      myEE.prependListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: 'stream',
      listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;

    • event: 'connect',
      listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>, socket: Socket | TLSSocket) => void
      ): this;

      Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

      server.once('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.once('foo', () => console.log('a'));
      myEE.prependOnceListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: 'stream',
      listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;

    • ...origins: (string | URL | { origin: string })[]
      ): void;

      Submits an ORIGIN frame (as defined by RFC 8336) to the connected client to advertise the set of origins for which the server is capable of providing authoritative responses.

      import http2 from 'node:http2';
      const options = getSecureOptionsSomehow();
      const server = http2.createSecureServer(options);
      server.on('stream', (stream) => {
        stream.respond();
        stream.end('ok');
      });
      server.on('session', (session) => {
        session.origin('https://example.com', 'https://example.org');
      });
      

      When a string is passed as an origin, it will be parsed as a URL and the origin will be derived. For instance, the origin for the HTTP URL 'https://example.org/foo/bar' is the ASCII string 'https://example.org'. An error will be thrown if either the given string cannot be parsed as a URL or if a valid origin cannot be derived.

      A URL object, or any object with an origin property, may be passed as an origin, in which case the value of the origin property will be used. The value of the origin property must be a properly serialized ASCII origin.

      Alternatively, the origins option may be used when creating a new HTTP/2 server using the http2.createSecureServer() method:

      import http2 from 'node:http2';
      const options = getSecureOptionsSomehow();
      options.origins = ['https://example.com', 'https://example.org'];
      const server = http2.createSecureServer(options);
      server.on('stream', (stream) => {
        stream.respond();
        stream.end('ok');
      });
      
      @param origins

      One or more URL Strings passed as separate arguments.

    • callback: (err: null | Error, duration: number, payload: Buffer) => void
      ): boolean;

      Sends a PING frame to the connected HTTP/2 peer. A callback function must be provided. The method will return true if the PING was sent, false otherwise.

      The maximum number of outstanding (unacknowledged) pings is determined by the maxOutstandingPings configuration option. The default maximum is 10.

      If provided, the payload must be a Buffer, TypedArray, or DataView containing 8 bytes of data that will be transmitted with the PING and returned with the ping acknowledgment.

      The callback will be invoked with three arguments: an error argument that will be null if the PING was successfully acknowledged, a duration argument that reports the number of milliseconds elapsed since the ping was sent and the acknowledgment was received, and a Buffer containing the 8-byte PING payload.

      session.ping(Buffer.from('abcdefgh'), (err, duration, payload) => {
        if (!err) {
          console.log(`Ping acknowledged in ${duration} milliseconds`);
          console.log(`With payload '${payload.toString()}'`);
        }
      });
      

      If the payload argument is not specified, the default payload will be the 64-bit timestamp (little endian) marking the start of the PING duration.

      payload: ArrayBufferView,
      callback: (err: null | Error, duration: number, payload: Buffer) => void
      ): boolean;
    • event: 'connect',
      listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>, socket: Socket | TLSSocket) => void
      ): this;

      Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.prependListener('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: 'stream',
      listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;

    • event: 'connect',
      listener: (session: ServerHttp2Session<Http1Request, Http1Response, Http2Request, Http2Response>, socket: Socket | TLSSocket) => void
      ): this;

      Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

      server.prependOnceListener('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: 'stream',
      listener: (stream: ServerHttp2Stream, headers: IncomingHttpHeaders, flags: number) => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;

    • eventName: string | symbol
      ): Function[];

      Returns a copy of the array of listeners for the event named eventName, including any wrappers (such as those created by .once()).

      import { EventEmitter } from 'node:events';
      const emitter = new EventEmitter();
      emitter.once('log', () => console.log('log once'));
      
      // Returns a new Array with a function `onceWrapper` which has a property
      // `listener` which contains the original listener bound above
      const listeners = emitter.rawListeners('log');
      const logFnWrapper = listeners[0];
      
      // Logs "log once" to the console and does not unbind the `once` event
      logFnWrapper.listener();
      
      // Logs "log once" to the console and removes the listener
      logFnWrapper();
      
      emitter.on('log', () => console.log('log persistently'));
      // Will return a new Array with a single function bound by `.on()` above
      const newListeners = emitter.rawListeners('log');
      
      // Logs "log persistently" twice
      newListeners[0]();
      emitter.emit('log');
      
    • ref(): void;

      Calls ref() on this Http2Session instance's underlying net.Socket.

    • eventName?: string | symbol
      ): this;

      Removes all listeners, or those of the specified eventName.

      It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).

      Returns a reference to the EventEmitter, so that calls can be chained.

    • eventName: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Removes the specified listener from the listener array for the event named eventName.

      const callback = (stream) => {
        console.log('someone connected!');
      };
      server.on('connection', callback);
      // ...
      server.removeListener('connection', callback);
      

      removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance.

      Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from emit() in progress. Subsequent events behave as expected.

      import { EventEmitter } from 'node:events';
      class MyEmitter extends EventEmitter {}
      const myEmitter = new MyEmitter();
      
      const callbackA = () => {
        console.log('A');
        myEmitter.removeListener('event', callbackB);
      };
      
      const callbackB = () => {
        console.log('B');
      };
      
      myEmitter.on('event', callbackA);
      
      myEmitter.on('event', callbackB);
      
      // callbackA removes listener callbackB but it will still be called.
      // Internal listener array at time of emit [callbackA, callbackB]
      myEmitter.emit('event');
      // Prints:
      //   A
      //   B
      
      // callbackB is now removed.
      // Internal listener array [callbackA]
      myEmitter.emit('event');
      // Prints:
      //   A
      

      Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.

      When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed:

      import { EventEmitter } from 'node:events';
      const ee = new EventEmitter();
      
      function pong() {
        console.log('pong');
      }
      
      ee.on('ping', pong);
      ee.once('ping', pong);
      ee.removeListener('ping', pong);
      
      ee.emit('ping');
      ee.emit('ping');
      

      Returns a reference to the EventEmitter, so that calls can be chained.

    • windowSize: number
      ): void;

      Sets the local endpoint's window size. The windowSize is the total window size to set, not the delta.

      import http2 from 'node:http2';
      
      const server = http2.createServer();
      const expectedWindowSize = 2 ** 20;
      server.on('connect', (session) => {
      
        // Set local window size to be 2 ** 20
        session.setLocalWindowSize(expectedWindowSize);
      });
      
    • n: number
      ): this;

      By default EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps find memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.

      Returns a reference to the EventEmitter, so that calls can be chained.
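
      For example (a generic EventEmitter sketch; the limit of 20 is arbitrary):

      import { EventEmitter } from 'node:events';

      const busyEmitter = new EventEmitter();
      // Allow up to 20 listeners on this emitter before a leak warning is printed.
      busyEmitter.setMaxListeners(20);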

    • msecs: number,
      callback?: () => void
      ): void;

      Used to set a callback function that is called when there is no activity on the Http2Session after msecs milliseconds. The given callback is registered as a listener on the 'timeout' event.
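
      For instance, a server might close sessions that have been idle for too long (a minimal sketch; the two-minute timeout is an arbitrary choice):

      import http2 from 'node:http2';

      const server = http2.createServer();
      server.on('session', (session) => {
        // Close the session after two minutes of inactivity.
        session.setTimeout(120_000, () => session.close());
      });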

    • settings: Settings,
      callback?: (err: null | Error, settings: Settings, duration: number) => void
      ): void;

      Updates the current local settings for this Http2Session and sends a new SETTINGS frame to the connected HTTP/2 peer.

      Once called, the http2session.pendingSettingsAck property will be true while the session is waiting for the remote peer to acknowledge the new settings.

      The new settings will not become effective until the SETTINGS acknowledgment is received and the 'localSettings' event is emitted. It is possible to send multiple SETTINGS frames while acknowledgment is still pending.

      @param callback

      Callback that is called once the session is connected or right away if the session is already connected.
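
      As an illustrative sketch, disabling push and capping concurrent streams on an existing session might look like this (the specific settings values are arbitrary):

      import http2 from 'node:http2';

      const server = http2.createServer();
      server.on('session', (session) => {
        session.settings({ enablePush: false, maxConcurrentStreams: 64 }, (err, settings, duration) => {
          if (err) throw err;
          // 'settings' reflects the local settings that were acknowledged.
          console.log(`SETTINGS acknowledged after ${duration} ms`, settings);
        });
      });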

    • unref(): void;

      Calls unref() on this Http2Session instance's underlying net.Socket.

  • interface ServerHttp2Stream

    Duplex streams are streams that implement both the Readable and Writable interfaces.

    Examples of Duplex streams include:

    • TCP sockets
    • zlib streams
    • crypto streams
    • readonly aborted: boolean

      Set to true if the Http2Stream instance was aborted abnormally. When set, the 'aborted' event will have been emitted.

    • allowHalfOpen: boolean

      If false then the stream will automatically end the writable side when the readable side ends. Set initially by the allowHalfOpen constructor option, which defaults to true.

      This can be changed manually to change the half-open behavior of an existing Duplex stream instance, but must be changed before the 'end' event is emitted.

    • readonly bufferSize: number

      This property shows the number of characters currently buffered to be written. See net.Socket.bufferSize for details.

    • readonly closed: boolean

      Set to true if the Http2Stream instance has been closed.

    • readonly destroyed: boolean

      Set to true if the Http2Stream instance has been destroyed and is no longer usable.

    • readonly endAfterHeaders: boolean

      Set to true if the END_STREAM flag was set in the request or response HEADERS frame received, indicating that no additional data should be received and the readable side of the Http2Stream will be closed.

    • readonly errored: null | Error

      Returns error if the stream has been destroyed with an error.

    • readonly headersSent: boolean

      True if headers were sent, false otherwise (read-only).

    • readonly id?: number

      The numeric stream identifier of this Http2Stream instance. Set to undefined if the stream identifier has not yet been assigned.

    • readonly pending: boolean

      Set to true if the Http2Stream instance has not yet been assigned a numeric stream identifier.

    • readonly pushAllowed: boolean

      Read-only property mapped to the SETTINGS_ENABLE_PUSH flag of the remote client's most recent SETTINGS frame. Will be true if the remote peer accepts push streams, false otherwise. Settings are the same for every Http2Stream in the same Http2Session.

    • readable: boolean

      Is true if it is safe to call read, which means the stream has not been destroyed or emitted 'error' or 'end'.

    • readonly readableAborted: boolean

      Returns whether the stream was destroyed or errored before emitting 'end'.

    • readonly readableDidRead: boolean

      Returns whether 'data' has been emitted.

    • readonly readableEncoding: null | BufferEncoding

      Getter for the property encoding of a given Readable stream. The encoding property can be set using the setEncoding method.

    • readonly readableEnded: boolean

      Becomes true when 'end' event is emitted.

    • readonly readableFlowing: null | boolean

      This property reflects the current state of a Readable stream as described in the Three states section.

    • readonly readableHighWaterMark: number

      Returns the value of highWaterMark passed when creating this Readable.

    • readonly readableLength: number

      This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the highWaterMark.

    • readonly readableObjectMode: boolean

      Getter for the property objectMode of a given Readable stream.

    • readonly rstCode: number

      Set to the RST_STREAM error code reported when the Http2Stream is destroyed after either receiving an RST_STREAM frame from the connected peer, calling http2stream.close(), or http2stream.destroy(). Will be undefined if the Http2Stream has not been closed.

    • readonly sentHeaders: OutgoingHttpHeaders

      An object containing the outbound headers sent for this Http2Stream.

    • readonly sentInfoHeaders?: OutgoingHttpHeaders[]

      An array of objects containing the outbound informational (additional) headers sent for this Http2Stream.

    • readonly sentTrailers?: OutgoingHttpHeaders

      An object containing the outbound trailers sent for this Http2Stream.

    • readonly session: undefined | Http2Session

      A reference to the Http2Session instance that owns this Http2Stream. The value will be undefined after the Http2Stream instance is destroyed.

    • readonly state: StreamState

      Provides miscellaneous information about the current state of the Http2Stream.

      A current state of this Http2Stream.

    • readonly writable: boolean

      Is true if it is safe to call writable.write(), which means the stream has not been destroyed, errored, or ended.

    • readonly writableAborted: boolean

      Returns whether the stream was destroyed or errored before emitting 'finish'.

    • readonly writableCorked: number

      Number of times writable.uncork() needs to be called in order to fully uncork the stream.

    • readonly writableEnded: boolean

      Is true after writable.end() has been called. This property does not indicate whether the data has been flushed; for this, use writable.writableFinished instead.

    • readonly writableFinished: boolean

      Is set to true immediately before the 'finish' event is emitted.

    • readonly writableHighWaterMark: number

      Return the value of highWaterMark passed when creating this Writable.

    • readonly writableLength: number

      This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the highWaterMark.

    • readonly writableNeedDrain: boolean

      Is true if the stream's buffer has been full and stream will emit 'drain'.

    • readonly writableObjectMode: boolean

      Getter for the property objectMode of a given Writable stream.

    • callback: (error?: null | Error) => void
      ): void;
    • error: null | Error,
      callback: (error?: null | Error) => void
      ): void;
    • callback: (error?: null | Error) => void
      ): void;
    • size: number
      ): void;
    • chunk: any,
      encoding: BufferEncoding,
      callback: (error?: null | Error) => void
      ): void;
    • chunks: { chunk: any; encoding: BufferEncoding }[],
      callback: (error?: null | Error) => void
      ): void;
    • [Symbol.asyncDispose](): Promise<void>;

      Calls readable.destroy() with an AbortError and returns a promise that fulfills when the stream is finished.

    • [Symbol.asyncIterator](): AsyncIterator<any>;
    • error: Error,
      event: string | symbol,
      ...args: AnyRest
      ): void;
    • ): void;

      Sends an additional informational HEADERS frame to the connected HTTP/2 peer.
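
      A minimal sketch, assuming this corresponds to http2stream.additionalHeaders() as in Node.js, sending an interim 102 response before the final headers:

      import http2 from 'node:http2';

      const server = http2.createServer();
      server.on('stream', (stream) => {
        // Send an informational 102 Processing response, then the final response.
        stream.additionalHeaders({ ':status': 102 });
        stream.respond({ ':status': 200 });
        stream.end('done');
      });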

    • event: 'aborted',
      listener: () => void
      ): this;

      Event emitter. The defined events include:

      1. close
      2. data
      3. drain
      4. end
      5. error
      6. finish
      7. pause
      8. pipe
      9. readable
      10. resume
      11. unpipe
      event: 'close',
      listener: () => void
      ): this;
      event: 'data',
      listener: (chunk: string | Buffer<ArrayBufferLike>) => void
      ): this;
      event: 'drain',
      listener: () => void
      ): this;
      event: 'end',
      listener: () => void
      ): this;
      event: 'error',
      listener: (err: Error) => void
      ): this;
      event: 'finish',
      listener: () => void
      ): this;
      event: 'frameError',
      listener: (frameType: number, errorCode: number) => void
      ): this;
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;
      event: 'streamClosed',
      listener: (code: number) => void
      ): this;
      event: 'timeout',
      listener: () => void
      ): this;
      event: 'trailers',
      listener: (trailers: IncomingHttpHeaders, flags: number) => void
      ): this;
      event: 'wantTrailers',
      listener: () => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • options?: Pick<ArrayOptions, 'signal'>

      This method returns a new stream with chunks of the underlying stream paired with a counter in the form [index, chunk]. The first index value is 0 and it increases by 1 for each chunk produced.

      @returns

      a stream of indexed pairs.
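
      A short sketch, assuming this maps to the readable.asIndexedPairs() stream helper from Node.js (shown here on a generic Readable):

      import { Readable } from 'node:stream';

      // Each chunk is paired with its zero-based index.
      const pairs = await Readable.from(['a', 'b', 'c']).asIndexedPairs().toArray();
      console.log(pairs); // [ [ 0, 'a' ], [ 1, 'b' ], [ 2, 'c' ] ]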

    • code?: number,
      callback?: () => void
      ): void;

      Closes the Http2Stream instance by sending an RST_STREAM frame to the connected HTTP/2 peer.

      @param code

      Unsigned 32-bit integer identifying the error code.

      @param callback

      An optional function registered to listen for the 'close' event.
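
      For example, a server could refuse a request by resetting its stream with an explicit error code (a minimal sketch):

      import http2 from 'node:http2';

      const server = http2.createServer();
      server.on('stream', (stream) => {
        // Reset the stream with RST_STREAM using the REFUSED_STREAM error code.
        stream.close(http2.constants.NGHTTP2_REFUSED_STREAM, () => {
          console.log('stream closed');
        });
      });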

    • compose<T extends ReadableStream>(
      stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,
      options?: { signal: AbortSignal }
      ): T;
    • cork(): void;

      The writable.cork() method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.

      The primary intent of writable.cork() is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them all to writable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork() without implementing writable._writev() may have an adverse effect on throughput.

      See also: writable.uncork(), writable._writev().
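
      For example (a generic Writable sketch, not specific to Http2Stream), several small writes can be batched behind cork() and flushed together:

      import { PassThrough } from 'node:stream';

      const stream = new PassThrough();
      stream.cork();
      stream.write('hello ');
      stream.write('world');
      // Flush both buffered chunks together on the next tick.
      process.nextTick(() => stream.uncork());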

    • error?: Error
      ): this;

      Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the readable stream will release any internal resources and subsequent calls to push() will be ignored.

      Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.

      Implementors should not override this method, but instead implement readable._destroy().

      @param error

      Error which will be passed as payload in 'error' event

    • limit: number,
      options?: Pick<ArrayOptions, 'signal'>

      This method returns a new stream with the first limit chunks dropped from the start.

      @param limit

      the number of chunks to drop from the readable.

      @returns

      a stream with limit chunks dropped from the start.
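
      A short sketch on a generic Readable (assuming the readable.drop() stream helper from Node.js):

      import { Readable } from 'node:stream';

      // Skip the first two chunks and collect the rest.
      const rest = await Readable.from([1, 2, 3, 4]).drop(2).toArray();
      console.log(rest); // [ 3, 4 ]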

    • event: 'aborted'
      ): boolean;

      Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

      Returns true if the event had listeners, false otherwise.

      import { EventEmitter } from 'node:events';
      const myEmitter = new EventEmitter();
      
      // First listener
      myEmitter.on('event', function firstListener() {
        console.log('Helloooo! first listener');
      });
      // Second listener
      myEmitter.on('event', function secondListener(arg1, arg2) {
        console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
      });
      // Third listener
      myEmitter.on('event', function thirdListener(...args) {
        const parameters = args.join(', ');
        console.log(`event with parameters ${parameters} in third listener`);
      });
      
      console.log(myEmitter.listeners('event'));
      
      myEmitter.emit('event', 1, 2, 3, 4, 5);
      
      // Prints:
      // [
      //   [Function: firstListener],
      //   [Function: secondListener],
      //   [Function: thirdListener]
      // ]
      // Helloooo! first listener
      // event with parameters 1, 2 in second listener
      // event with parameters 1, 2, 3, 4, 5 in third listener
      
      event: 'close'
      ): boolean;
      event: 'data',
      chunk: string | Buffer<ArrayBufferLike>
      ): boolean;
      event: 'drain'
      ): boolean;
      event: 'end'
      ): boolean;
      event: 'error',
      err: Error
      ): boolean;
      event: 'finish'
      ): boolean;
      event: 'frameError',
      frameType: number,
      errorCode: number
      ): boolean;
      event: 'pipe',
      src: Readable
      ): boolean;
      event: 'unpipe',
      src: Readable
      ): boolean;
      event: 'streamClosed',
      code: number
      ): boolean;
      event: 'timeout'
      ): boolean;
      event: 'trailers',
      trailers: IncomingHttpHeaders,
      flags: number
      ): boolean;
      event: 'wantTrailers'
      ): boolean;
      event: string | symbol,
      ...args: any[]
      ): boolean;
    • cb?: () => void
      ): this;

      Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

      Calling the write method after calling end will raise an error.

      // Write 'hello, ' and then end with 'world!'.
      import fs from 'node:fs';
      const file = fs.createWriteStream('example.txt');
      file.write('hello, ');
      file.end('world!');
      // Writing more now is not allowed!
      
      chunk: any,
      cb?: () => void
      ): this;

      Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

      Calling the write method after calling end will raise an error.

      // Write 'hello, ' and then end with 'world!'.
      import fs from 'node:fs';
      const file = fs.createWriteStream('example.txt');
      file.write('hello, ');
      file.end('world!');
      // Writing more now is not allowed!
      
      @param chunk

      Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

      chunk: any,
      encoding: BufferEncoding,
      cb?: () => void
      ): this;

      Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

      Calling the write method after calling end will raise an error.

      // Write 'hello, ' and then end with 'world!'.
      import fs from 'node:fs';
      const file = fs.createWriteStream('example.txt');
      file.write('hello, ');
      file.end('world!');
      // Writing more now is not allowed!
      
      @param chunk

      Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

      @param encoding

      The encoding if chunk is a string

    • eventNames(): string | symbol[];

      Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or Symbols.

      import { EventEmitter } from 'node:events';
      
      const myEE = new EventEmitter();
      myEE.on('foo', () => {});
      myEE.on('bar', () => {});
      
      const sym = Symbol('symbol');
      myEE.on(sym, () => {});
      
      console.log(myEE.eventNames());
      // Prints: [ 'foo', 'bar', Symbol(symbol) ]
      
    • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
      options?: ArrayOptions
      ): Promise<boolean>;

      This method is similar to Array.prototype.every and calls fn on each chunk in the stream to check whether all awaited return values are truthy. Once an fn call's awaited return value is falsy, the stream is destroyed and the promise is fulfilled with false. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true.

      @param fn

      a function to call on each chunk of the stream. Async or not.

      @returns

      a promise evaluating to true if fn returned a truthy value for every one of the chunks.
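
      A short sketch on a generic Readable (assuming the readable.every() stream helper from Node.js):

      import { Readable } from 'node:stream';

      // Resolves to true only if every chunk satisfies the predicate.
      const allSmall = await Readable.from([1, 2, 3]).every((n) => n < 10);
      console.log(allSmall); // true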

    • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
      options?: ArrayOptions

      This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be awaited.

      @param fn

      a function to filter chunks from the stream. Async or not.

      @returns

      a stream filtered with the predicate fn.
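
      A short sketch on a generic Readable (assuming the readable.filter() stream helper from Node.js):

      import { Readable } from 'node:stream';

      // Keep only the even chunks; the predicate may also be async.
      const evens = await Readable.from([1, 2, 3, 4])
        .filter((n) => n % 2 === 0)
        .toArray();
      console.log(evens); // [ 2, 4 ]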

    • find<T>(
      fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => data is T,
      options?: ArrayOptions
      ): Promise<undefined | T>;

      This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.

      @param fn

      a function to call on each chunk of the stream. Async or not.

      @returns

      a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.

      fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
      options?: ArrayOptions
      ): Promise<any>;

      This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.

      @param fn

      a function to call on each chunk of the stream. Async or not.

      @returns

      a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.
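
      A short sketch on a generic Readable:

      import { Readable } from 'node:stream';

      // Resolves with the first chunk greater than 2, or undefined if none matches.
      const found = await Readable.from([1, 2, 3, 4]).find((n) => n > 2);
      console.log(found); // 3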

    • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => any,
      options?: ArrayOptions

      This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.

      It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.

      @param fn

      a function to map over every chunk in the stream. May be async. May be a stream or generator.

      @returns

      a stream flat-mapped with the function fn.
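
      A short sketch on a generic Readable (assuming the readable.flatMap() stream helper from Node.js):

      import { Readable } from 'node:stream';

      // Each chunk expands into an iterable whose items are flattened into the result stream.
      const expanded = await Readable.from([1, 2])
        .flatMap((n) => [n, n * 10])
        .toArray();
      console.log(expanded); // [ 1, 10, 2, 20 ]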

    • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => void | Promise<void>,
      options?: ArrayOptions
      ): Promise<void>;

      This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be awaited.

      This method is different from for await...of loops in that it can optionally process chunks concurrently. In addition, a forEach iteration can only be stopped by having passed a signal option and aborting the related AbortController while for await...of can be stopped with break or return. In either case the stream will be destroyed.

      This method is different from listening to the 'data' event in that it uses the 'readable' event in the underlying machinery and can limit the number of concurrent fn calls.

      @param fn

      a function to call on each chunk of the stream. Async or not.

      @returns

      a promise for when the stream has finished.
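
      A short sketch on a generic Readable (assuming the readable.forEach() stream helper from Node.js; the concurrency value is arbitrary):

      import { Readable } from 'node:stream';

      // Process up to two chunks at a time; the returned promise settles when the stream ends.
      await Readable.from([1, 2, 3, 4]).forEach(async (n) => {
        console.log(n);
      }, { concurrency: 2 });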

    • getMaxListeners(): number;

      Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to EventEmitter.defaultMaxListeners.

    • isPaused(): boolean;

      The readable.isPaused() method returns the current operating state of the Readable. This is used primarily by the mechanism that underlies the readable.pipe() method. In most typical cases, there will be no reason to use this method directly.

      const readable = new stream.Readable();
      
      readable.isPaused(); // === false
      readable.pause();
      readable.isPaused(); // === true
      readable.resume();
      readable.isPaused(); // === false
      
    • options?: { destroyOnReturn: boolean }
      ): AsyncIterator<any>;

      The iterator created by this method gives users the option to cancel the destruction of the stream if the for await...of loop is exited by return, break, or throw, or if the iterator should destroy the stream if the stream emitted an error during iteration.

    • eventName: string | symbol,
      listener?: Function
      ): number;

      Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.

      @param eventName

      The name of the event being listened for

      @param listener

      The event handler function

    • eventName: string | symbol
      ): Function[];

      Returns a copy of the array of listeners for the event named eventName.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      console.log(util.inspect(server.listeners('connection')));
      // Prints: [ [Function] ]
      
    • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => any,
      options?: ArrayOptions

      This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be awaited before being passed to the result stream.

      @param fn

      a function to map over every chunk in the stream. Async or not.

      @returns

      a stream mapped with the function fn.
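
      A short sketch on a generic Readable (assuming the readable.map() stream helper from Node.js):

      import { Readable } from 'node:stream';

      // Map each chunk through an (optionally async) function.
      const upper = await Readable.from(['a', 'b']).map((s) => s.toUpperCase()).toArray();
      console.log(upper); // [ 'A', 'B' ]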

    • off<K>(
      eventName: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Alias for emitter.removeListener().

    • event: 'aborted',
      listener: () => void
      ): this;

      Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.on('foo', () => console.log('a'));
      myEE.prependListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: 'close',
      listener: () => void
      ): this;
      event: 'data',
      listener: (chunk: string | Buffer<ArrayBufferLike>) => void
      ): this;
      event: 'drain',
      listener: () => void
      ): this;
      event: 'end',
      listener: () => void
      ): this;
      event: 'error',
      listener: (err: Error) => void
      ): this;
      event: 'finish',
      listener: () => void
      ): this;
      event: 'frameError',
      listener: (frameType: number, errorCode: number) => void
      ): this;
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;
      event: 'streamClosed',
      listener: (code: number) => void
      ): this;
      event: 'timeout',
      listener: () => void
      ): this;
      event: 'trailers',
      listener: (trailers: IncomingHttpHeaders, flags: number) => void
      ): this;
      event: 'wantTrailers',
      listener: () => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • event: 'aborted',
      listener: () => void
      ): this;

      Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

      server.once('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.once('foo', () => console.log('a'));
      myEE.prependOnceListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: 'close',
      listener: () => void
      ): this;
      event: 'data',
      listener: (chunk: string | Buffer<ArrayBufferLike>) => void
      ): this;
      event: 'drain',
      listener: () => void
      ): this;
      event: 'end',
      listener: () => void
      ): this;
      event: 'error',
      listener: (err: Error) => void
      ): this;
      event: 'finish',
      listener: () => void
      ): this;
      event: 'frameError',
      listener: (frameType: number, errorCode: number) => void
      ): this;
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;
      event: 'streamClosed',
      listener: (code: number) => void
      ): this;
      event: 'timeout',
      listener: () => void
      ): this;
      event: 'trailers',
      listener: (trailers: IncomingHttpHeaders, flags: number) => void
      ): this;
      event: 'wantTrailers',
      listener: () => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • pause(): this;

      The readable.pause() method will cause a stream in flowing mode to stop emitting 'data' events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.

      const readable = getReadableStreamSomehow();
      readable.on('data', (chunk) => {
        console.log(`Received ${chunk.length} bytes of data.`);
        readable.pause();
        console.log('There will be no additional data for 1 second.');
        setTimeout(() => {
          console.log('Now data will start flowing again.');
          readable.resume();
        }, 1000);
      });
      

      The readable.pause() method has no effect if there is a 'readable' event listener.

    • pipe<T extends WritableStream>(
      destination: T,
      options?: { end: boolean }
      ): T;
    • event: 'aborted',
      listener: () => void
      ): this;

      Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.prependListener('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: 'close',
      listener: () => void
      ): this;
      event: 'data',
      listener: (chunk: string | Buffer<ArrayBufferLike>) => void
      ): this;
      event: 'drain',
      listener: () => void
      ): this;
      event: 'end',
      listener: () => void
      ): this;
      event: 'error',
      listener: (err: Error) => void
      ): this;
      event: 'finish',
      listener: () => void
      ): this;
      event: 'frameError',
      listener: (frameType: number, errorCode: number) => void
      ): this;
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;
      event: 'streamClosed',
      listener: (code: number) => void
      ): this;
      event: 'timeout',
      listener: () => void
      ): this;
      event: 'trailers',
      listener: (trailers: IncomingHttpHeaders, flags: number) => void
      ): this;
      event: 'wantTrailers',
      listener: () => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • event: 'aborted',
      listener: () => void
      ): this;

      Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

      server.prependOnceListener('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      prependOnceListener(
      event: 'close',
      listener: () => void
      ): this;
      prependOnceListener(
      event: 'data',
      listener: (chunk: string | Buffer<ArrayBufferLike>) => void
      ): this;
      prependOnceListener(
      event: 'drain',
      listener: () => void
      ): this;
      prependOnceListener(
      event: 'end',
      listener: () => void
      ): this;
      prependOnceListener(
      event: 'error',
      listener: (err: Error) => void
      ): this;
      prependOnceListener(
      event: 'finish',
      listener: () => void
      ): this;
      prependOnceListener(
      event: 'frameError',
      listener: (frameType: number, errorCode: number) => void
      ): this;
      prependOnceListener(
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;
      prependOnceListener(
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;
      prependOnceListener(
      event: 'streamClosed',
      listener: (code: number) => void
      ): this;
      prependOnceListener(
      event: 'timeout',
      listener: () => void
      ): this;
      prependOnceListener(
      event: 'trailers',
      listener: (trailers: IncomingHttpHeaders, flags: number) => void
      ): this;
      prependOnceListener(
      event: 'wantTrailers',
      listener: () => void
      ): this;
      prependOnceListener(
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • priority(
      options: StreamPriorityOptions
      ): void;

      Updates the priority for this Http2Stream instance.
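
      For example, a minimal sketch of updating a client stream's priority (the exclusive, parent, weight, and silent fields are the standard StreamPriorityOptions fields):

      import http2 from 'node:http2';
      const client = http2.connect('http://example.org:8000');
      const req = client.request({ ':path': '/' });
      
      // Update this stream's priority; with silent: false the new priority
      // is advertised to the peer in a PRIORITY frame.
      req.priority({ exclusive: false, parent: 0, weight: 16, silent: false });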

    • push(
      chunk: any,
      encoding?: BufferEncoding
      ): boolean;
    • pushStream(
      headers: OutgoingHttpHeaders,
      callback?: (err: null | Error, pushStream: ServerHttp2Stream, headers: OutgoingHttpHeaders) => void
      ): void;

      Initiates a push stream. The callback is invoked with the new Http2Stream instance created for the push stream passed as the second argument, or an Error passed as the first argument.

      import http2 from 'node:http2';
      const server = http2.createServer();
      server.on('stream', (stream) => {
        stream.respond({ ':status': 200 });
        stream.pushStream({ ':path': '/' }, (err, pushStream, headers) => {
          if (err) throw err;
          pushStream.respond({ ':status': 200 });
          pushStream.end('some pushed data');
        });
        stream.end('some data');
      });
      

      Setting the weight of a push stream is not allowed in the HEADERS frame. Pass a weight value to http2stream.priority with the silent option set to true to enable server-side bandwidth balancing between concurrent streams.

      Calling http2stream.pushStream() from within a pushed stream is not permitted and will throw an error.

      @param callback

      Callback that is called once the push stream has been initiated.

      pushStream(
      headers: OutgoingHttpHeaders,
      options?: StreamPriorityOptions,
      callback?: (err: null | Error, pushStream: ServerHttp2Stream, headers: OutgoingHttpHeaders) => void
      ): void;
    • rawListeners(
      eventName: string | symbol
      ): Function[];

      Returns a copy of the array of listeners for the event named eventName, including any wrappers (such as those created by .once()).

      import { EventEmitter } from 'node:events';
      const emitter = new EventEmitter();
      emitter.once('log', () => console.log('log once'));
      
      // Returns a new Array with a function `onceWrapper` which has a property
      // `listener` which contains the original listener bound above
      const listeners = emitter.rawListeners('log');
      const logFnWrapper = listeners[0];
      
      // Logs "log once" to the console and does not unbind the `once` event
      logFnWrapper.listener();
      
      // Logs "log once" to the console and removes the listener
      logFnWrapper();
      
      emitter.on('log', () => console.log('log persistently'));
      // Will return a new Array with a single function bound by `.on()` above
      const newListeners = emitter.rawListeners('log');
      
      // Logs "log persistently" twice
      newListeners[0]();
      emitter.emit('log');
      
    • read(
      size?: number
      ): any;

      The readable.read() method reads data out of the internal buffer and returns it. If no data is available to be read, null is returned. By default, the data is returned as a Buffer object unless an encoding has been specified using the readable.setEncoding() method or the stream is operating in object mode.

      The optional size argument specifies a specific number of bytes to read. If size bytes are not available to be read, null will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.

      If the size argument is not specified, all of the data contained in the internal buffer will be returned.

      The size argument must be less than or equal to 1 GiB.

      The readable.read() method should only be called on Readable streams operating in paused mode. In flowing mode, readable.read() is called automatically until the internal buffer is fully drained.

      const readable = getReadableStreamSomehow();
      
      // 'readable' may be triggered multiple times as data is buffered in
      readable.on('readable', () => {
        let chunk;
        console.log('Stream is readable (new data received in buffer)');
        // Use a loop to make sure we read all currently available data
        while (null !== (chunk = readable.read())) {
          console.log(`Read ${chunk.length} bytes of data...`);
        }
      });
      
      // 'end' will be triggered once when there is no more data available
      readable.on('end', () => {
        console.log('Reached end of stream.');
      });
      

      Each call to readable.read() returns a chunk of data, or null. The chunks are not concatenated. A while loop is necessary to consume all data currently in the buffer. When reading a large file, .read() may return null, having consumed all buffered content so far, but there may still be more data to come that has not yet been buffered. In this case a new 'readable' event will be emitted when there is more data in the buffer. Finally, the 'end' event will be emitted when there is no more data to come.

      Therefore to read a file's whole contents from a readable, it is necessary to collect chunks across multiple 'readable' events:

      const chunks = [];
      
      readable.on('readable', () => {
        let chunk;
        while (null !== (chunk = readable.read())) {
          chunks.push(chunk);
        }
      });
      
      readable.on('end', () => {
        const content = chunks.join('');
      });
      

      A Readable stream in object mode will always return a single item from a call to readable.read(size), regardless of the value of the size argument.

      If the readable.read() method returns a chunk of data, a 'data' event will also be emitted.

      Calling read after the 'end' event has been emitted will return null. No runtime error will be raised.

      @param size

      Optional argument to specify how much data to read.

    • reduce<T = any>(
      fn: (previous: any, data: any, options?: Pick<ArrayOptions, 'signal'>) => T,
      initial?: undefined,
      options?: Pick<ArrayOptions, 'signal'>
      ): Promise<T>;

      This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.

      If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.

      The reducer function iterates the stream element by element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function and pass it to the readable.map method.

      @param fn

      a reducer function to call over every chunk in the stream. Async or not.

      @param initial

      the initial value to use in the reduction.

      @returns

      a promise for the final value of the reduction.

      reduce<T = any>(
      fn: (previous: T, data: any, options?: Pick<ArrayOptions, 'signal'>) => T,
      initial: T,
      options?: Pick<ArrayOptions, 'signal'>
      ): Promise<T>;

      This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.

      If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.

      The reducer function iterates the stream element by element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function and pass it to the readable.map method.

      @param fn

      a reducer function to call over every chunk in the stream. Async or not.

      @param initial

      the initial value to use in the reduction.

      @returns

      a promise for the final value of the reduction.
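
      As a brief sketch (assuming a stream created with Readable.from; the reducer here just sums chunk lengths):

      import { Readable } from 'node:stream';
      
      const totalLength = await Readable.from(['hello', 'world'])
        .reduce((previous, data) => previous + data.length, 0);
      console.log(totalLength); // 10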

    • removeAllListeners(
      eventName?: string | symbol
      ): this;

      Removes all listeners, or those of the specified eventName.

      It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).

      Returns a reference to the EventEmitter, so that calls can be chained.
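
      For example, a minimal sketch using a plain EventEmitter:

      import { EventEmitter } from 'node:events';
      const emitter = new EventEmitter();
      emitter.on('data', () => {});
      emitter.on('error', () => {});
      
      // Remove every 'data' listener; the 'error' listener stays attached.
      emitter.removeAllListeners('data');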

    • removeListener(
      event: 'close',
      listener: () => void
      ): this;

      Removes the specified listener from the listener array for the event named eventName.

      const callback = (stream) => {
        console.log('someone connected!');
      };
      server.on('connection', callback);
      // ...
      server.removeListener('connection', callback);
      

      removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance.

      Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from an emit() in progress. Subsequent events behave as expected.

      import { EventEmitter } from 'node:events';
      class MyEmitter extends EventEmitter {}
      const myEmitter = new MyEmitter();
      
      const callbackA = () => {
        console.log('A');
        myEmitter.removeListener('event', callbackB);
      };
      
      const callbackB = () => {
        console.log('B');
      };
      
      myEmitter.on('event', callbackA);
      
      myEmitter.on('event', callbackB);
      
      // callbackA removes listener callbackB but it will still be called.
      // Internal listener array at time of emit [callbackA, callbackB]
      myEmitter.emit('event');
      // Prints:
      //   A
      //   B
      
      // callbackB is now removed.
      // Internal listener array [callbackA]
      myEmitter.emit('event');
      // Prints:
      //   A
      

      Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.

      When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed:

      import { EventEmitter } from 'node:events';
      const ee = new EventEmitter();
      
      function pong() {
        console.log('pong');
      }
      
      ee.on('ping', pong);
      ee.once('ping', pong);
      ee.removeListener('ping', pong);
      
      ee.emit('ping');
      ee.emit('ping');
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      removeListener(
      event: 'data',
      listener: (chunk: any) => void
      ): this;
      removeListener(
      event: 'drain',
      listener: () => void
      ): this;
      removeListener(
      event: 'end',
      listener: () => void
      ): this;
      removeListener(
      event: 'error',
      listener: (err: Error) => void
      ): this;
      removeListener(
      event: 'finish',
      listener: () => void
      ): this;
      removeListener(
      event: 'pause',
      listener: () => void
      ): this;
      removeListener(
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;
      removeListener(
      event: 'readable',
      listener: () => void
      ): this;
      removeListener(
      event: 'resume',
      listener: () => void
      ): this;
      removeListener(
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;
      removeListener(
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • respond(
      headers?: OutgoingHttpHeaders,
      options?: ServerStreamResponseOptions
      ): void;
      import http2 from 'node:http2';
      const server = http2.createServer();
      server.on('stream', (stream) => {
        stream.respond({ ':status': 200 });
        stream.end('some data');
      });
      

      Initiates a response. When the options.waitForTrailers option is set, the 'wantTrailers' event will be emitted immediately after queuing the last chunk of payload data to be sent. The http2stream.sendTrailers() method can then be used to send trailing header fields to the peer.

      When options.waitForTrailers is set, the Http2Stream will not automatically close when the final DATA frame is transmitted. User code must call either http2stream.sendTrailers() or http2stream.close() to close the Http2Stream.

      import http2 from 'node:http2';
      const server = http2.createServer();
      server.on('stream', (stream) => {
        stream.respond({ ':status': 200 }, { waitForTrailers: true });
        stream.on('wantTrailers', () => {
          stream.sendTrailers({ ABC: 'some value to send' });
        });
        stream.end('some data');
      });
      
    • respondWithFD(
      fd: number | FileHandle,
      headers?: OutgoingHttpHeaders,
      options?: ServerStreamFileResponseOptions
      ): void;

      Initiates a response whose data is read from the given file descriptor. No validation is performed on the given file descriptor. If an error occurs while attempting to read data using the file descriptor, the Http2Stream will be closed using an RST_STREAM frame using the standard INTERNAL_ERROR code.

      When used, the Http2Stream object's Duplex interface will be closed automatically.

      import http2 from 'node:http2';
      import fs from 'node:fs';
      
      const server = http2.createServer();
      server.on('stream', (stream) => {
        const fd = fs.openSync('/some/file', 'r');
      
        const stat = fs.fstatSync(fd);
        const headers = {
          'content-length': stat.size,
          'last-modified': stat.mtime.toUTCString(),
          'content-type': 'text/plain; charset=utf-8',
        };
        stream.respondWithFD(fd, headers);
        stream.on('close', () => fs.closeSync(fd));
      });
      

      The optional options.statCheck function may be specified to give user code an opportunity to set additional content headers based on the fs.Stat details of the given fd. If the statCheck function is provided, the http2stream.respondWithFD() method will perform an fs.fstat() call to collect details on the provided file descriptor.

      The offset and length options may be used to limit the response to a specific range subset. This can be used, for instance, to support HTTP Range requests.

      The file descriptor or FileHandle is not closed when the stream is closed, so it will need to be closed manually once it is no longer needed. Using the same file descriptor concurrently for multiple streams is not supported and may result in data loss. Re-using a file descriptor after a stream has finished is supported.

      When the options.waitForTrailers option is set, the 'wantTrailers' event will be emitted immediately after queuing the last chunk of payload data to be sent. The http2stream.sendTrailers() method can then be used to send trailing header fields to the peer.

      When options.waitForTrailers is set, the Http2Stream will not automatically close when the final DATA frame is transmitted. User code must call either http2stream.sendTrailers() or http2stream.close() to close the Http2Stream.

      import http2 from 'node:http2';
      import fs from 'node:fs';
      
      const server = http2.createServer();
      server.on('stream', (stream) => {
        const fd = fs.openSync('/some/file', 'r');
      
        const stat = fs.fstatSync(fd);
        const headers = {
          'content-length': stat.size,
          'last-modified': stat.mtime.toUTCString(),
          'content-type': 'text/plain; charset=utf-8',
        };
        stream.respondWithFD(fd, headers, { waitForTrailers: true });
        stream.on('wantTrailers', () => {
          stream.sendTrailers({ ABC: 'some value to send' });
        });
      
        stream.on('close', () => fs.closeSync(fd));
      });
      
      @param fd

      A readable file descriptor.

    • respondWithFile(
      path: string,
      headers?: OutgoingHttpHeaders,
      options?: ServerStreamFileResponseOptionsWithError
      ): void;

      Sends a regular file as the response. The path must specify a regular file or an 'error' event will be emitted on the Http2Stream object.

      When used, the Http2Stream object's Duplex interface will be closed automatically.

      The optional options.statCheck function may be specified to give user code an opportunity to set additional content headers based on the fs.Stat details of the given file:

      If an error occurs while attempting to read the file data, the Http2Stream will be closed using an RST_STREAM frame using the standard INTERNAL_ERROR code. If the onError callback is defined, then it will be called. Otherwise, the stream will be destroyed.

      Example using a file path:

      import http2 from 'node:http2';
      const server = http2.createServer();
      server.on('stream', (stream) => {
        function statCheck(stat, headers) {
          headers['last-modified'] = stat.mtime.toUTCString();
        }
      
        function onError(err) {
          // stream.respond() can throw if the stream has been destroyed by
          // the other side.
          try {
            if (err.code === 'ENOENT') {
              stream.respond({ ':status': 404 });
            } else {
              stream.respond({ ':status': 500 });
            }
          } catch (err) {
            // Perform actual error handling.
            console.error(err);
          }
          stream.end();
        }
      
        stream.respondWithFile('/some/file',
                               { 'content-type': 'text/plain; charset=utf-8' },
                               { statCheck, onError });
      });
      

      The options.statCheck function may also be used to cancel the send operation by returning false. For instance, a conditional request may check the stat results to determine if the file has been modified to return an appropriate 304 response:

      import http2 from 'node:http2';
      const server = http2.createServer();
      server.on('stream', (stream) => {
        function statCheck(stat, headers) {
          // Check the stat here...
          stream.respond({ ':status': 304 });
          return false; // Cancel the send operation
        }
        stream.respondWithFile('/some/file',
                               { 'content-type': 'text/plain; charset=utf-8' },
                               { statCheck });
      });
      

      The content-length header field will be automatically set.

      The offset and length options may be used to limit the response to a specific range subset. This can be used, for instance, to support HTTP Range requests.

      The options.onError function may also be used to handle all the errors that could happen before the delivery of the file is initiated. The default behavior is to destroy the stream.

      When the options.waitForTrailers option is set, the 'wantTrailers' event will be emitted immediately after queuing the last chunk of payload data to be sent. The http2stream.sendTrailers() method can then be used to send trailing header fields to the peer.

      When options.waitForTrailers is set, the Http2Stream will not automatically close when the final DATA frame is transmitted. User code must call either http2stream.sendTrailers() or http2stream.close() to close the Http2Stream.

      import http2 from 'node:http2';
      const server = http2.createServer();
      server.on('stream', (stream) => {
        stream.respondWithFile('/some/file',
                               { 'content-type': 'text/plain; charset=utf-8' },
                               { waitForTrailers: true });
        stream.on('wantTrailers', () => {
          stream.sendTrailers({ ABC: 'some value to send' });
        });
      });
      
    • resume(): this;

      The readable.resume() method causes an explicitly paused Readable stream to resume emitting 'data' events, switching the stream into flowing mode.

      The readable.resume() method can be used to fully consume the data from a stream without actually processing any of that data:

      getReadableStreamSomehow()
        .resume()
        .on('end', () => {
          console.log('Reached the end, but did not read anything.');
        });
      

      The readable.resume() method has no effect if there is a 'readable' event listener.

    • sendTrailers(
      headers: OutgoingHttpHeaders
      ): void;

      Sends a trailing HEADERS frame to the connected HTTP/2 peer. This method will cause the Http2Stream to be immediately closed and must only be called after the 'wantTrailers' event has been emitted. When sending a request or sending a response, the options.waitForTrailers option must be set in order to keep the Http2Stream open after the final DATA frame so that trailers can be sent.

      import http2 from 'node:http2';
      const server = http2.createServer();
      server.on('stream', (stream) => {
        stream.respond(undefined, { waitForTrailers: true });
        stream.on('wantTrailers', () => {
          stream.sendTrailers({ xyz: 'abc' });
        });
        stream.end('Hello World');
      });
      

      The HTTP/1 specification forbids trailers from containing HTTP/2 pseudo-header fields (e.g. ':method', ':path', etc).

    • setDefaultEncoding(
      encoding: BufferEncoding
      ): this;

      The writable.setDefaultEncoding() method sets the default encoding for a Writable stream.

      @param encoding

      The new default encoding
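
      A minimal sketch, using a PassThrough purely as a stand-in Writable:

      import { PassThrough } from 'node:stream';
      
      const writable = new PassThrough();
      writable.setDefaultEncoding('utf8');
      // String chunks written without an explicit encoding are now encoded as UTF-8.
      writable.write('héllo');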

    • setEncoding(
      encoding: BufferEncoding
      ): this;

      The readable.setEncoding() method sets the character encoding for data read from the Readable stream.

      By default, no encoding is assigned and stream data will be returned as Buffer objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than as Buffer objects. For instance, calling readable.setEncoding('utf8') will cause the output data to be interpreted as UTF-8 data, and passed as strings. Calling readable.setEncoding('hex') will cause the data to be encoded in hexadecimal string format.

      The Readable stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream as Buffer objects.

      const readable = getReadableStreamSomehow();
      readable.setEncoding('utf8');
      readable.on('data', (chunk) => {
        assert.equal(typeof chunk, 'string');
        console.log('Got %d characters of string data:', chunk.length);
      });
      
      @param encoding

      The encoding to use.

    • setMaxListeners(
      n: number
      ): this;

      By default EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps find memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.

      Returns a reference to the EventEmitter, so that calls can be chained.
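
      For example, raising the limit on a single emitter:

      import { EventEmitter } from 'node:events';
      
      const emitter = new EventEmitter();
      // Allow up to 20 listeners on this emitter before the possible-leak warning is printed.
      emitter.setMaxListeners(20);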

    • setTimeout(
      msecs: number,
      callback?: () => void
      ): void;
      import http2 from 'node:http2';
      const client = http2.connect('http://example.org:8000');
      const { NGHTTP2_CANCEL } = http2.constants;
      const req = client.request({ ':path': '/' });
      
      // Cancel the stream if there's no activity after 5 seconds
      req.setTimeout(5000, () => req.close(NGHTTP2_CANCEL));
      
    • some(
      fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
      options?: ArrayOptions
      ): Promise<boolean>;

      This method is similar to Array.prototype.some and calls fn on each chunk in the stream until the awaited return value is true (or any truthy value). Once an fn call on a chunk resolves to a truthy value, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false.

      @param fn

      a function to call on each chunk of the stream. Async or not.

      @returns

      a promise evaluating to true if fn returned a truthy value for at least one of the chunks.
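
      A brief sketch (assuming a stream created with Readable.from):

      import { Readable } from 'node:stream';
      
      // Fulfills with true as soon as a chunk longer than three characters is seen;
      // the stream is destroyed at that point.
      const found = await Readable.from(['ab', 'abcd', 'x']).some((data) => data.length > 3);
      console.log(found); // true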

    • take(
      limit: number,
      options?: Pick<ArrayOptions, 'signal'>
      ): Readable;

      This method returns a new stream with the first limit chunks.

      @param limit

      the number of chunks to take from the readable.

      @returns

      a stream with limit chunks taken.
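
      For instance (again assuming a Readable.from source):

      import { Readable } from 'node:stream';
      
      // Keep only the first two chunks of the stream.
      const firstTwo = await Readable.from([1, 2, 3, 4]).take(2).toArray();
      console.log(firstTwo); // [ 1, 2 ]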

    • toArray(
      options?: Pick<ArrayOptions, 'signal'>
      ): Promise<any[]>;

      This method allows easily obtaining the contents of a stream.

      As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.

      @returns

      a promise containing an array with the contents of the stream.
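
      A short sketch (Readable.from source assumed):

      import { Readable } from 'node:stream';
      
      // Buffers the whole stream into memory as a single array.
      const contents = await Readable.from([1, 2, 3]).toArray();
      console.log(contents); // [ 1, 2, 3 ]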

    • uncork(): void;

      The writable.uncork() method flushes all data buffered since cork was called.

      When using writable.cork() and writable.uncork() to manage the buffering of writes to a stream, defer calls to writable.uncork() using process.nextTick(). Doing so allows batching of all writable.write() calls that occur within a given Node.js event loop phase.

      stream.cork();
      stream.write('some ');
      stream.write('data ');
      process.nextTick(() => stream.uncork());
      

      If the writable.cork() method is called multiple times on a stream, the same number of calls to writable.uncork() must be called to flush the buffered data.

      stream.cork();
      stream.write('some ');
      stream.cork();
      stream.write('data ');
      process.nextTick(() => {
        stream.uncork();
        // The data will not be flushed until uncork() is called a second time.
        stream.uncork();
      });
      

      See also: writable.cork().

    • unpipe(
      destination?: WritableStream
      ): this;

      The readable.unpipe() method detaches a Writable stream previously attached using the pipe method.

      If the destination is not specified, then all pipes are detached.

      If the destination is specified, but no pipe is set up for it, then the method does nothing.

      import fs from 'node:fs';
      const readable = getReadableStreamSomehow();
      const writable = fs.createWriteStream('file.txt');
      // All the data from readable goes into 'file.txt',
      // but only for the first second.
      readable.pipe(writable);
      setTimeout(() => {
        console.log('Stop writing to file.txt.');
        readable.unpipe(writable);
        console.log('Manually close the file stream.');
        writable.end();
      }, 1000);
      
      @param destination

      Optional specific stream to unpipe

    • unshift(
      chunk: any,
      encoding?: BufferEncoding
      ): void;

      Passing chunk as null signals the end of the stream (EOF) and behaves the same as readable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.

      The readable.unshift() method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.

      The stream.unshift(chunk) method cannot be called after the 'end' event has been emitted or a runtime error will be thrown.

      Developers using stream.unshift() often should consider switching to use of a Transform stream instead. See the API for stream implementers section for more information.

      // Pull off a header delimited by \n\n.
      // Use unshift() if we get too much.
      // Call the callback with (error, header, stream).
      import { StringDecoder } from 'node:string_decoder';
      function parseHeader(stream, callback) {
        stream.on('error', callback);
        stream.on('readable', onReadable);
        const decoder = new StringDecoder('utf8');
        let header = '';
        function onReadable() {
          let chunk;
          while (null !== (chunk = stream.read())) {
            const str = decoder.write(chunk);
            if (str.includes('\n\n')) {
              // Found the header boundary.
              const split = str.split(/\n\n/);
              header += split.shift();
              const remaining = split.join('\n\n');
              const buf = Buffer.from(remaining, 'utf8');
              stream.removeListener('error', callback);
              // Remove the 'readable' listener before unshifting.
              stream.removeListener('readable', onReadable);
              if (buf.length)
                stream.unshift(buf);
              // Now the body of the message can be read from the stream.
              callback(null, header, stream);
              return;
            }
            // Still reading the header.
            header += str;
          }
        }
      }
      

      Unlike push, stream.unshift(chunk) will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results if readable.unshift() is called during a read (i.e. from within a _read implementation on a custom stream). Following the call to readable.unshift() with an immediate push will reset the reading state appropriately; however, it is best to simply avoid calling readable.unshift() while in the process of performing a read.

      @param chunk

      Chunk of data to unshift onto the read queue. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray}, {DataView} or null. For object mode streams, chunk may be any JavaScript value.

      @param encoding

      Encoding of string chunks. Must be a valid Buffer encoding, such as 'utf8' or 'ascii'.

    • wrap(
      stream: ReadableStream
      ): this;

      Prior to Node.js 0.10, streams did not implement the entire node:stream module API as it is currently defined. (See Compatibility for more information.)

      When using an older Node.js library that emits 'data' events and has a pause method that is advisory only, the readable.wrap() method can be used to create a Readable stream that uses the old stream as its data source.

      It will rarely be necessary to use readable.wrap() but the method has been provided as a convenience for interacting with older Node.js applications and libraries.

      import { OldReader } from './old-api-module.js';
      import { Readable } from 'node:stream';
      const oreader = new OldReader();
      const myReader = new Readable().wrap(oreader);
      
      myReader.on('readable', () => {
        myReader.read(); // etc.
      });
      
      @param stream

      An "old style" readable stream

    • write(
      chunk: any,
      callback?: (error: undefined | null | Error) => void
      ): boolean;

      The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

      The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

      While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

      Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

      If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

      function write(data, cb) {
        if (!stream.write(data)) {
          stream.once('drain', cb);
        } else {
          process.nextTick(cb);
        }
      }
      
      // Wait for cb to be called before doing any other write.
      write('hello', () => {
        console.log('Write completed, do more writes now.');
      });
      

      A Writable stream in object mode will always ignore the encoding argument.

      @param chunk

      Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

      @param callback

      Callback for when this chunk of data is flushed.

      @returns

      false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.

      write(
      chunk: any,
      encoding: BufferEncoding,
      callback?: (error: undefined | null | Error) => void
      ): boolean;

      The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

      The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

      While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

      Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

      If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

      function write(data, cb) {
        if (!stream.write(data)) {
          stream.once('drain', cb);
        } else {
          process.nextTick(cb);
        }
      }
      
      // Wait for cb to be called before doing any other write.
      write('hello', () => {
        console.log('Write completed, do more writes now.');
      });
      

      A Writable stream in object mode will always ignore the encoding argument.

      @param chunk

      Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

      @param encoding

      The encoding, if chunk is a string.

      @param callback

      Callback for when this chunk of data is flushed.

      @returns

      false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.

  • interface ServerOptions<Http1Request extends typeof IncomingMessage = typeof IncomingMessage, Http1Response extends typeof ServerResponse = typeof ServerResponse, Http2Request extends typeof Http2ServerRequest = typeof Http2ServerRequest, Http2Response extends typeof Http2ServerResponse = typeof Http2ServerResponse>

  • interface ServerSessionOptions<Http1Request extends typeof IncomingMessage = typeof IncomingMessage, Http1Response extends typeof ServerResponse = typeof ServerResponse, Http2Request extends typeof Http2ServerRequest = typeof Http2ServerRequest, Http2Response extends typeof Http2ServerResponse = typeof Http2ServerResponse>

  • interface ServerStreamFileResponseOptions

  • interface ServerStreamFileResponseOptionsWithError

  • interface SessionOptions

  • interface StatOptions

  • interface StreamPriorityOptions

  • interface StreamState