tls.TLSSocket

class TLSSocket

Performs transparent encryption of written data and all required TLS negotiation.

Instances of tls.TLSSocket implement the duplex Stream interface.

Methods that return TLS connection metadata (e.g. TLSSocket.getPeerCertificate) will only return data while the connection is open.

  • allowHalfOpen: boolean

    If false then the stream will automatically end the writable side when the readable side ends. Set initially by the allowHalfOpen constructor option, which defaults to true.

    This can be changed manually to change the half-open behavior of an existing Duplex stream instance, but must be changed before the 'end' event is emitted.

  • alpnProtocol: null | string | false

    String containing the selected ALPN protocol. Before a handshake has completed, this value is always null. When the handshake has completed but no ALPN protocol was selected, tlsSocket.alpnProtocol equals false.
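
    A minimal sketch of reading alpnProtocol on a client connection (the host and protocol list are placeholders; ALPNProtocols is the tls.connect() option that enables negotiation):

    import tls from 'node:tls';

    const socket = tls.connect({
      host: 'example.com', // hypothetical server
      port: 443,
      ALPNProtocols: ['h2', 'http/1.1'],
    });

    socket.on('secureConnect', () => {
      // false if the server selected no ALPN protocol, otherwise the protocol name.
      console.log('negotiated ALPN protocol:', socket.alpnProtocol);
      socket.end();
    });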

  • authorizationError: Error

    Returns the reason why the peer's certificate was not verified. This property is set only when tlsSocket.authorized === false.

  • authorized: boolean

    This property is true if the peer certificate was signed by one of the CAs specified when creating the tls.TLSSocket instance, otherwise false.
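
    For example, a server that requests client certificates can branch on authorized; a sketch, with hypothetical key, certificate, and CA file paths:

    import fs from 'node:fs';
    import tls from 'node:tls';

    const server = tls.createServer({
      key: fs.readFileSync('server-key.pem'),   // hypothetical paths
      cert: fs.readFileSync('server-cert.pem'),
      ca: [fs.readFileSync('client-ca.pem')],
      requestCert: true,
      rejectUnauthorized: false, // accept the connection, then inspect it
    }, (socket) => {
      if (socket.authorized) {
        socket.write('hello, verified client\n');
      } else {
        // authorizationError holds the verification failure reason.
        console.error('unverified client:', socket.authorizationError);
        socket.destroy();
      }
    });

    server.listen(8443);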

  • readonly autoSelectFamilyAttemptedAddresses: string[]

    This property is only present if the family autoselection algorithm is enabled in socket.connect(options) and it is an array of the addresses that have been attempted.

    Each address is a string in the form of $IP:$PORT. If the connection was successful, then the last address is the one that the socket is currently connected to.

  • readonly bytesRead: number

    The number of received bytes.

  • readonly bytesWritten: number

    The number of bytes sent.

  • readonly closed: boolean

    Is true after 'close' has been emitted.

  • readonly connecting: boolean

    If true, socket.connect(options[, connectListener]) was called and has not yet finished. It will stay true until the socket becomes connected, then it is set to false and the 'connect' event is emitted. Note that the socket.connect(options[, connectListener]) callback is a listener for the 'connect' event.

  • readonly destroyed: boolean

    See writable.destroyed for further details.

  • encrypted: true

    Always returns true. This may be used to distinguish TLS sockets from regular net.Socket instances.
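
    A small sketch of using encrypted in code that may be handed either kind of socket:

    // Plain net.Socket instances have no `encrypted` property, so this is a
    // cheap way to tell the two apart.
    function describeTransport(socket) {
      return socket.encrypted ? 'TLS' : 'plaintext';
    }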

  • readonly errored: null | Error

    Returns error if the stream has been destroyed with an error.

  • readonly localAddress?: string

    The string representation of the local IP address the remote client is connecting on. For example, in a server listening on '0.0.0.0', if a client connects on '192.168.1.1', the value of socket.localAddress would be '192.168.1.1'.

  • readonly localFamily?: string

    The string representation of the local IP family. 'IPv4' or 'IPv6'.

  • readonly localPort?: number

    The numeric representation of the local port. For example, 80 or 21.

  • readonly pending: boolean

    This is true if the socket is not connected yet, either because .connect() has not yet been called or because it is still in the process of connecting (see socket.connecting).

  • readable: boolean

    Is true if it is safe to call read, which means the stream has not been destroyed or emitted 'error' or 'end'.

  • readonly readableAborted: boolean

    Returns whether the stream was destroyed or errored before emitting 'end'.

  • readonly readableDidRead: boolean

    Returns whether 'data' has been emitted.

  • readonly readableEncoding: null | BufferEncoding

    Getter for the property encoding of a given Readable stream. The encoding property can be set using the setEncoding method.

  • readonly readableEnded: boolean

    Becomes true when the 'end' event is emitted.

  • readonly readableFlowing: null | boolean

    This property reflects the current state of a Readable stream as described in the Three states section.

  • readonly readableHighWaterMark: number

    Returns the value of highWaterMark passed when creating this Readable.

  • readonly readableLength: number

    This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the highWaterMark.

  • readonly readableObjectMode: boolean

    Getter for the property objectMode of a given Readable stream.

  • readonly readyState: SocketReadyState

    This property represents the state of the connection as a string.

    • If the stream is connecting, socket.readyState is opening.
    • If the stream is readable and writable, it is open.
    • If the stream is readable and not writable, it is readOnly.
    • If the stream is not readable and writable, it is writeOnly.
  • readonly remoteAddress?: string

    The string representation of the remote IP address. For example, '74.125.127.100' or '2001:4860:a005::68'. Value may be undefined if the socket is destroyed (for example, if the client disconnected).

  • readonly remoteFamily?: string

    The string representation of the remote IP family. 'IPv4' or 'IPv6'. Value may be undefined if the socket is destroyed (for example, if the client disconnected).

  • readonly remotePort?: number

    The numeric representation of the remote port. For example, 80 or 21. Value may be undefined if the socket is destroyed (for example, if the client disconnected).

  • readonly timeout?: number

    The socket timeout in milliseconds as set by socket.setTimeout(). It is undefined if a timeout has not been set.

  • readonly writable: boolean

    Is true if it is safe to call writable.write(), which means the stream has not been destroyed, errored, or ended.

  • readonly writableCorked: number

    Number of times writable.uncork() needs to be called in order to fully uncork the stream.

  • readonly writableEnded: boolean

    Is true after writable.end() has been called. This property does not indicate whether the data has been flushed, for this use writable.writableFinished instead.

  • readonly writableFinished: boolean

    Is set to true immediately before the 'finish' event is emitted.

  • readonly writableHighWaterMark: number

    Return the value of highWaterMark passed when creating this Writable.

  • readonly writableLength: number

    This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the highWaterMark.

  • readonly writableNeedDrain: boolean

    Is true if the stream's buffer has been full and the stream will emit 'drain'.

  • readonly writableObjectMode: boolean

    Getter for the property objectMode of a given Writable stream.

  • static captureRejections: boolean

    Value: boolean

    Change the default captureRejections option on all new EventEmitter objects.

  • readonly static captureRejectionSymbol: typeof captureRejectionSymbol

    Value: Symbol.for('nodejs.rejection')

    See how to write a custom rejection handler.

  • static defaultMaxListeners: number

    By default, a maximum of 10 listeners can be registered for any single event. This limit can be changed for individual EventEmitter instances using the emitter.setMaxListeners(n) method. To change the default for all EventEmitter instances, the events.defaultMaxListeners property can be used. If this value is not a positive number, a RangeError is thrown.

    Take caution when setting the events.defaultMaxListeners because the change affects all EventEmitter instances, including those created before the change is made. However, calling emitter.setMaxListeners(n) still has precedence over events.defaultMaxListeners.

    This is not a hard limit. The EventEmitter instance will allow more listeners to be added but will output a trace warning to stderr indicating that a "possible EventEmitter memory leak" has been detected. For any single EventEmitter, the emitter.getMaxListeners() and emitter.setMaxListeners() methods can be used to temporarily avoid this warning:

    import { EventEmitter } from 'node:events';
    const emitter = new EventEmitter();
    emitter.setMaxListeners(emitter.getMaxListeners() + 1);
    emitter.once('event', () => {
      // do stuff
      emitter.setMaxListeners(Math.max(emitter.getMaxListeners() - 1, 0));
    });
    

    The --trace-warnings command-line flag can be used to display the stack trace for such warnings.

    The emitted warning can be inspected with process.on('warning') and will have the additional emitter, type, and count properties, referring to the event emitter instance, the event's name and the number of attached listeners, respectively. Its name property is set to 'MaxListenersExceededWarning'.

  • readonly static errorMonitor: typeof errorMonitor

    This symbol shall be used to install a listener for only monitoring 'error' events. Listeners installed using this symbol are called before the regular 'error' listeners are called.

    Installing a listener using this symbol does not change the behavior once an 'error' event is emitted. Therefore, the process will still crash if no regular 'error' listener is installed.

  • _construct(
    callback: (error?: null | Error) => void
    ): void;
  • _destroy(
    error: null | Error,
    callback: (error?: null | Error) => void
    ): void;
  • _final(
    callback: (error?: null | Error) => void
    ): void;
  • _read(
    size: number
    ): void;
  • _write(
    chunk: any,
    encoding: BufferEncoding,
    callback: (error?: null | Error) => void
    ): void;
  • _writev(
    chunks: { chunk: any; encoding: BufferEncoding }[],
    callback: (error?: null | Error) => void
    ): void;
  • [Symbol.asyncDispose](): Promise<void>;

    Calls readable.destroy() with an AbortError and returns a promise that fulfills when the stream is finished.

  • [Symbol.asyncIterator](): AsyncIterator<any>;
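
    Because the socket is a Readable, it can also be consumed with for await...of. A sketch, assuming socket is an established TLSSocket:

    const chunks = [];
    for await (const chunk of socket) {
      chunks.push(chunk);
    }
    const body = Buffer.concat(chunks); // everything received before 'end'
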
  • error: Error,
    event: string | symbol,
    ...args: AnyRest
    ): void;
  • event: string,
    listener: (...args: any[]) => void
    ): this;

    events.EventEmitter

    1. close
    2. connect
    3. connectionAttempt
    4. connectionAttemptFailed
    5. connectionAttemptTimeout
    6. data
    7. drain
    8. end
    9. error
    10. lookup
    11. ready
    12. timeout
    event: 'OCSPResponse',
    listener: (response: Buffer) => void
    ): this;

    event: 'secureConnect',
    listener: () => void
    ): this;

    event: 'session',
    listener: (session: Buffer) => void
    ): this;

    event: 'keylog',
    listener: (line: Buffer) => void
    ): this;

  • Returns the bound address, the address family name, and port of the socket as reported by the operating system: { port: 12346, family: 'IPv4', address: '127.0.0.1' }

  • options?: Pick<ArrayOptions, 'signal'>

    This method returns a new stream with chunks of the underlying stream paired with a counter in the form [index, chunk]. The first index value is 0 and it increases by 1 for each chunk produced.

    @returns

    a stream of indexed pairs.

  • compose<T extends ReadableStream>(
    stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,
    options?: { signal: AbortSignal }
    ): T;
  • connectionListener?: () => void
    ): this;

    Initiate a connection on a given socket.

    Possible signatures:

    • socket.connect(options[, connectListener])
    • socket.connect(path[, connectListener]) for IPC connections.
    • socket.connect(port[, host][, connectListener]) for TCP connections.
    • Returns: net.Socket The socket itself.

    This function is asynchronous. When the connection is established, the 'connect' event will be emitted. If there is a problem connecting, instead of a 'connect' event, an 'error' event will be emitted with the error passed to the 'error' listener. The last parameter connectListener, if supplied, will be added as a listener for the 'connect' event once.

    This function should only be used for reconnecting a socket after 'close' has been emitted or otherwise it may lead to undefined behavior.

    port: number,
    host: string,
    connectionListener?: () => void
    ): this;


    port: number,
    connectionListener?: () => void
    ): this;


    path: string,
    connectionListener?: () => void
    ): this;


  • cork(): void;

    The writable.cork() method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.

    The primary intent of writable.cork() is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them all to writable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork() without implementing writable._writev() may have an adverse effect on throughput.

    See also: writable.uncork(), writable._writev().

  • error?: Error
    ): this;

    Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the readable stream will release any internal resources and subsequent calls to push() will be ignored.

    Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.

    Implementors should not override this method, but instead implement readable._destroy().

    @param error

    Error which will be passed as payload in 'error' event

  • destroySoon(): void;

    Destroys the socket after all data is written. If the finish event was already emitted the socket is destroyed immediately. If the socket is still writable it implicitly calls socket.end().

  • Disables TLS renegotiation for this TLSSocket instance. Once called, attempts to renegotiate will trigger an 'error' event on the TLSSocket.
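
    A sketch of disabling renegotiation on every incoming connection (the key and certificate paths are placeholders):

    import fs from 'node:fs';
    import tls from 'node:tls';

    const server = tls.createServer({
      key: fs.readFileSync('server-key.pem'),   // hypothetical paths
      cert: fs.readFileSync('server-cert.pem'),
    }, (socket) => {
      // Any later client-initiated renegotiation triggers an 'error' event on this socket.
      socket.disableRenegotiation();
      socket.pipe(socket); // simple echo
    });

    server.listen(8443);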

  • limit: number,
    options?: Pick<ArrayOptions, 'signal'>

    This method returns a new stream with the first limit chunks dropped from the start.

    @param limit

    the number of chunks to drop from the readable.

    @returns

    a stream with limit chunks dropped from the start.

  • event: string | symbol,
    ...args: any[]
    ): boolean;

    Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

    Returns true if the event had listeners, false otherwise.

    import { EventEmitter } from 'node:events';
    const myEmitter = new EventEmitter();
    
    // First listener
    myEmitter.on('event', function firstListener() {
      console.log('Helloooo! first listener');
    });
    // Second listener
    myEmitter.on('event', function secondListener(arg1, arg2) {
      console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
    });
    // Third listener
    myEmitter.on('event', function thirdListener(...args) {
      const parameters = args.join(', ');
      console.log(`event with parameters ${parameters} in third listener`);
    });
    
    console.log(myEmitter.listeners('event'));
    
    myEmitter.emit('event', 1, 2, 3, 4, 5);
    
    // Prints:
    // [
    //   [Function: firstListener],
    //   [Function: secondListener],
    //   [Function: thirdListener]
    // ]
    // Helloooo! first listener
    // event with parameters 1, 2 in second listener
    // event with parameters 1, 2, 3, 4, 5 in third listener
    
    event: 'OCSPResponse',
    response: Buffer
    ): boolean;
    event: 'secureConnect'
    ): boolean;
    event: 'session',
    session: Buffer
    ): boolean;
    event: 'keylog',
    line: Buffer
    ): boolean;
  • enableTrace(): void;

    When enabled, TLS packet trace information is written to stderr. This can be used to debug TLS connection problems.

    The format of the output is identical to the output of openssl s_client -trace or openssl s_server -trace. While it is produced by OpenSSL's SSL_trace() function, the format is undocumented, can change without notice, and should not be relied on.

  • callback?: () => void
    ): this;

    Half-closes the socket. i.e., it sends a FIN packet. It is possible the server will still send some data.

    See writable.end() for further details.

    @param callback

    Optional callback for when the socket is finished.

    @returns

    The socket itself.

    buffer: string | Uint8Array<ArrayBufferLike>,
    callback?: () => void
    ): this;


    str: string | Uint8Array<ArrayBufferLike>,
    encoding?: BufferEncoding,
    callback?: () => void
    ): this;

    Half-closes the socket. i.e., it sends a FIN packet. It is possible the server will still send some data.

    See writable.end() for further details.

    @param encoding

    Only used when data is string.

    @param callback

    Optional callback for when the socket is finished.

    @returns

    The socket itself.

  • eventNames(): (string | symbol)[];

    Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or Symbols.

    import { EventEmitter } from 'node:events';
    
    const myEE = new EventEmitter();
    myEE.on('foo', () => {});
    myEE.on('bar', () => {});
    
    const sym = Symbol('symbol');
    myEE.on(sym, () => {});
    
    console.log(myEE.eventNames());
    // Prints: [ 'foo', 'bar', Symbol(symbol) ]
    
  • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
    options?: ArrayOptions
    ): Promise<boolean>;

    This method is similar to Array.prototype.every and calls fn on each chunk in the stream to check if all awaited return values are truthy value for fn. Once an fn call on a chunk awaited return value is falsy, the stream is destroyed and the promise is fulfilled with false. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true.

    @param fn

    a function to call on each chunk of the stream. Async or not.

    @returns

    a promise evaluating to true if fn returned a truthy value for every one of the chunks.

  • length: number,
    label: string,
    context: Buffer
    ): Buffer;

    Keying material is used for validations to prevent different kind of attacks in network protocols, for example in the specifications of IEEE 802.1X.

    Example

    const keyingMaterial = tlsSocket.exportKeyingMaterial(
      128,
      'client finished');
    
    /*
     Example return value of keyingMaterial:
     <Buffer 76 26 af 99 c5 56 8e 42 09 91 ef 9f 93 cb ad 6c 7b 65 f8 53 f1 d8 d9
        12 5a 33 b8 b5 25 df 7b 37 9f e0 e2 4f b8 67 83 a3 2f cd 5d 41 42 4c 91
        74 ef 2c ... 78 more bytes>
    */

    See the OpenSSL SSL_export_keying_material documentation for more information.

    @param length

    number of bytes to retrieve from keying material

    @param label
    @param context

    Optionally provide a context.

    @returns

    requested bytes of the keying material

  • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
    options?: ArrayOptions

    This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be awaited.

    @param fn

    a function to filter chunks from the stream. Async or not.

    @returns

    a stream filtered with the predicate fn.

  • find<T>(
    fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => data is T,
    options?: ArrayOptions
    ): Promise<undefined | T>;

    This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.

    @param fn

    a function to call on each chunk of the stream. Async or not.

    @returns

    a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.

    fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
    options?: ArrayOptions
    ): Promise<any>;

    This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.

    @param fn

    a function to call on each chunk of the stream. Async or not.

    @returns

    a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.

  • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => any,
    options?: ArrayOptions

    This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.

    It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.

    @param fn

    a function to map over every chunk in the stream. May be async. May be a stream or generator.

    @returns

    a stream flat-mapped with the function fn.

  • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => void | Promise<void>,
    options?: ArrayOptions
    ): Promise<void>;

    This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be awaited.

    This method is different from for await...of loops in that it can optionally process chunks concurrently. In addition, a forEach iteration can only be stopped by having passed a signal option and aborting the related AbortController while for await...of can be stopped with break or return. In either case the stream will be destroyed.

    This method is different from listening to the 'data' event in that it uses the readable event in the underlying machinery and can limit the number of concurrent fn calls.

    @param fn

    a function to call on each chunk of the stream. Async or not.

    @returns

    a promise for when the stream has finished.
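
    A sketch of forEach with a concurrency limit; Readable.from and the async worker are used only to keep the example self-contained:

    import { Readable } from 'node:stream';

    // Stand-in for real asynchronous work on each chunk.
    const handle = (n) => new Promise((resolve) => setTimeout(resolve, 10, n));

    // Process up to two chunks at a time; the promise resolves when the stream is done.
    await Readable.from([1, 2, 3, 4]).forEach(async (n) => {
      await handle(n);
    }, { concurrency: 2 });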

  • getCertificate(): null | object | PeerCertificate;

    Returns an object representing the local certificate. The returned object has some properties corresponding to the fields of the certificate.

    See TLSSocket.getPeerCertificate for an example of the certificate structure.

    If there is no local certificate, an empty object will be returned. If the socket has been destroyed, null will be returned.

  • Returns an object containing information on the negotiated cipher suite.

    For example, a TLSv1.2 protocol with AES256-SHA cipher:

    {
        "name": "AES256-SHA",
        "standardName": "TLS_RSA_WITH_AES_256_CBC_SHA",
        "version": "SSLv3"
    }
    

    See SSL_CIPHER_get_name for more information.

  • Returns an object representing the type, name, and size of parameter of an ephemeral key exchange in perfect forward secrecy on a client connection. It returns an empty object when the key exchange is not ephemeral. As this is only supported on a client socket, null is returned if called on a server socket. The supported types are 'DH' and 'ECDH'. The name property is available only when type is 'ECDH'.

    For example: { type: 'ECDH', name: 'prime256v1', size: 256 }.

  • getFinished(): undefined | Buffer<ArrayBufferLike>;

    As the Finished messages are message digests of the complete handshake (with a total of 192 bits for TLS 1.0 and more for SSL 3.0), they can be used for external authentication procedures when the authentication provided by SSL/TLS is not desired or is not enough.

    Corresponds to the SSL_get_finished routine in OpenSSL and may be used to implement the tls-unique channel binding from RFC 5929.

    @returns

    The latest Finished message that has been sent to the socket as part of a SSL/TLS handshake, or undefined if no Finished message has been sent yet.

  • getMaxListeners(): number;

    Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to EventEmitter.defaultMaxListeners.

  • detailed: true

    Returns an object representing the peer's certificate. If the peer does not provide a certificate, an empty object will be returned. If the socket has been destroyed, null will be returned.

    If the full certificate chain was requested, each certificate will include an issuerCertificate property containing an object representing its issuer's certificate.

    @param detailed

    Include the full certificate chain if true, otherwise include just the peer's certificate.

    @returns

    A certificate object.

    detailed?: false

    Returns an object representing the peer's certificate. If the peer does not provide a certificate, an empty object will be returned. If the socket has been destroyed, null will be returned.

    If the full certificate chain was requested, each certificate will include an issuerCertificate property containing an object representing its issuer's certificate.

    @param detailed

    Include the full certificate chain if true, otherwise include just the peer's certificate.

    @returns

    A certificate object.

    detailed?: boolean

    Returns an object representing the peer's certificate. If the peer does not provide a certificate, an empty object will be returned. If the socket has been destroyed, null will be returned.

    If the full certificate chain was requested, each certificate will include an issuerCertificate property containing an object representing its issuer's certificate.

    @param detailed

    Include the full certificate chain if true, otherwise include just the peer's certificate.

    @returns

    A certificate object.
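
    A sketch of inspecting a few common fields of the returned certificate object, assuming socket is a TLSSocket whose peer presented a certificate:

    const cert = socket.getPeerCertificate();
    if (cert && Object.keys(cert).length > 0) {
      console.log('subject CN:', cert.subject && cert.subject.CN);
      console.log('issuer CN:', cert.issuer && cert.issuer.CN);
      console.log('valid until:', cert.valid_to);
      console.log('fingerprint:', cert.fingerprint256);
    }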

  • getPeerFinished(): undefined | Buffer<ArrayBufferLike>;

    As the Finished messages are message digests of the complete handshake (with a total of 192 bits for TLS 1.0 and more for SSL 3.0), they can be used for external authentication procedures when the authentication provided by SSL/TLS is not desired or is not enough.

    Corresponds to the SSL_get_peer_finished routine in OpenSSL and may be used to implement the tls-unique channel binding from RFC 5929.

    @returns

    The latest Finished message that is expected or has actually been received from the socket as part of a SSL/TLS handshake, or undefined if there is no Finished message so far.

  • Returns the peer certificate as an X509Certificate object.

    If there is no peer certificate, or the socket has been destroyed, undefined will be returned.

  • getProtocol(): null | string;

    Returns a string containing the negotiated SSL/TLS protocol version of the current connection. The value 'unknown' will be returned for connected sockets that have not completed the handshaking process. The value null will be returned for server sockets or disconnected client sockets.

    Protocol versions are:

    • 'SSLv3'
    • 'TLSv1'
    • 'TLSv1.1'
    • 'TLSv1.2'
    • 'TLSv1.3'

    See the OpenSSL SSL_get_version documentation for more information.
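
    A small sketch logging the negotiated protocol alongside the cipher, assuming socket comes from tls.connect():

    socket.on('secureConnect', () => {
      console.log(socket.getProtocol());     // e.g. 'TLSv1.3'
      console.log(socket.getCipher().name);  // e.g. 'TLS_AES_256_GCM_SHA384'
    });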

  • getSession(): undefined | Buffer<ArrayBufferLike>;

    Returns the TLS session data or undefined if no session was negotiated. On the client, the data can be provided to the session option of connect to resume the connection. On the server, it may be useful for debugging.

    See Session Resumption for more information.

    Note: getSession() works only for TLSv1.2 and below. For TLSv1.3, applications must use the 'session' event (it also works for TLSv1.2 and below).
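
    A sketch of client-side session resumption; the target host is a placeholder, and the 'session' event is used so the pattern also covers TLSv1.3:

    import tls from 'node:tls';

    const options = { host: 'example.com', port: 443 }; // hypothetical target
    let savedSession;

    const first = tls.connect(options, () => first.end());
    first.on('session', (session) => {
      savedSession = session;
    });

    first.on('close', () => {
      // Offer the saved session when reconnecting.
      const second = tls.connect({ ...options, session: savedSession }, () => {
        console.log('resumed:', second.isSessionReused());
        second.end();
      });
    });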

  • getSharedSigalgs(): string[];
    @returns

    List of signature algorithms shared between the server and the client in the order of decreasing preference.

  • getTLSTicket(): undefined | Buffer<ArrayBufferLike>;

    For a client, returns the TLS session ticket if one is available, or undefined. For a server, always returns undefined.

    It may be useful for debugging.

    See Session Resumption for more information.

  • Returns the local certificate as an X509Certificate object.

    If there is no local certificate, or the socket has been destroyed, undefined will be returned.

  • isPaused(): boolean;

    The readable.isPaused() method returns the current operating state of the Readable. This is used primarily by the mechanism that underlies the readable.pipe() method. In most typical cases, there will be no reason to use this method directly.

    const readable = new stream.Readable();
    
    readable.isPaused(); // === false
    readable.pause();
    readable.isPaused(); // === true
    readable.resume();
    readable.isPaused(); // === false
    
  • isSessionReused(): boolean;

    See Session Resumption for more information.

    @returns

    true if the session was reused, false otherwise.

  • options?: { destroyOnReturn: boolean }
    ): AsyncIterator<any>;

    The iterator created by this method gives users the option to cancel the destruction of the stream if the for await...of loop is exited by return, break, or throw, or if the iterator should destroy the stream if the stream emitted an error during iteration.

  • eventName: string | symbol,
    listener?: Function
    ): number;

    Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.

    @param eventName

    The name of the event being listened for

    @param listener

    The event handler function

  • eventName: string | symbol
    ): Function[];

    Returns a copy of the array of listeners for the event named eventName.

    server.on('connection', (stream) => {
      console.log('someone connected!');
    });
    console.log(util.inspect(server.listeners('connection')));
    // Prints: [ [Function] ]
    
  • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => any,
    options?: ArrayOptions

    This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be awaited before being passed to the result stream.

    @param fn

    a function to map over every chunk in the stream. Async or not.

    @returns

    a stream mapped with the function fn.

  • off<K>(
    eventName: string | symbol,
    listener: (...args: any[]) => void
    ): this;

    Alias for emitter.removeListener().

  • event: string,
    listener: (...args: any[]) => void
    ): this;

    Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

    server.on('connection', (stream) => {
      console.log('someone connected!');
    });
    

    Returns a reference to the EventEmitter, so that calls can be chained.

    By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

    import { EventEmitter } from 'node:events';
    const myEE = new EventEmitter();
    myEE.on('foo', () => console.log('a'));
    myEE.prependListener('foo', () => console.log('b'));
    myEE.emit('foo');
    // Prints:
    //   b
    //   a
    
    @param listener

    The callback function

    event: 'OCSPResponse',
    listener: (response: Buffer) => void
    ): this;
    event: 'secureConnect',
    listener: () => void
    ): this;
    event: 'session',
    listener: (session: Buffer) => void
    ): this;
    event: 'keylog',
    listener: (line: Buffer) => void
    ): this;
  • event: string,
    listener: (...args: any[]) => void
    ): this;

    Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

    server.once('connection', (stream) => {
      console.log('Ah, we have our first user!');
    });
    

    Returns a reference to the EventEmitter, so that calls can be chained.

    By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

    import { EventEmitter } from 'node:events';
    const myEE = new EventEmitter();
    myEE.once('foo', () => console.log('a'));
    myEE.prependOnceListener('foo', () => console.log('b'));
    myEE.emit('foo');
    // Prints:
    //   b
    //   a
    
    @param listener

    The callback function

    event: 'OCSPResponse',
    listener: (response: Buffer) => void
    ): this;
    event: 'secureConnect',
    listener: () => void
    ): this;
    event: 'session',
    listener: (session: Buffer) => void
    ): this;
    event: 'keylog',
    listener: (line: Buffer) => void
    ): this;
  • pause(): this;

    Pauses the reading of data. That is, 'data' events will not be emitted. Useful to throttle back an upload.

    @returns

    The socket itself.

  • pipe<T extends WritableStream>(
    destination: T,
    options?: { end: boolean }
    ): T;
  • event: string,
    listener: (...args: any[]) => void
    ): this;

    Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

    server.prependListener('connection', (stream) => {
      console.log('someone connected!');
    });
    

    Returns a reference to the EventEmitter, so that calls can be chained.

    @param listener

    The callback function

    event: 'OCSPResponse',
    listener: (response: Buffer) => void
    ): this;
    event: 'secureConnect',
    listener: () => void
    ): this;
    event: 'session',
    listener: (session: Buffer) => void
    ): this;
    event: 'keylog',
    listener: (line: Buffer) => void
    ): this;
  • event: string,
    listener: (...args: any[]) => void
    ): this;

    Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

    server.prependOnceListener('connection', (stream) => {
      console.log('Ah, we have our first user!');
    });
    

    Returns a reference to the EventEmitter, so that calls can be chained.

    @param listener

    The callback function

    event: 'OCSPResponse',
    listener: (response: Buffer) => void
    ): this;
    event: 'secureConnect',
    listener: () => void
    ): this;
    event: 'session',
    listener: (session: Buffer) => void
    ): this;
    event: 'keylog',
    listener: (line: Buffer) => void
    ): this;
  • chunk: any,
    encoding?: BufferEncoding
    ): boolean;
  • eventName: string | symbol
    ): Function[];

    Returns a copy of the array of listeners for the event named eventName, including any wrappers (such as those created by .once()).

    import { EventEmitter } from 'node:events';
    const emitter = new EventEmitter();
    emitter.once('log', () => console.log('log once'));
    
    // Returns a new Array with a function `onceWrapper` which has a property
    // `listener` which contains the original listener bound above
    const listeners = emitter.rawListeners('log');
    const logFnWrapper = listeners[0];
    
    // Logs "log once" to the console and does not unbind the `once` event
    logFnWrapper.listener();
    
    // Logs "log once" to the console and removes the listener
    logFnWrapper();
    
    emitter.on('log', () => console.log('log persistently'));
    // Will return a new Array with a single function bound by `.on()` above
    const newListeners = emitter.rawListeners('log');
    
    // Logs "log persistently" twice
    newListeners[0]();
    emitter.emit('log');
    
  • size?: number
    ): any;

    The readable.read() method reads data out of the internal buffer and returns it. If no data is available to be read, null is returned. By default, the data is returned as a Buffer object unless an encoding has been specified using the readable.setEncoding() method or the stream is operating in object mode.

    The optional size argument specifies a specific number of bytes to read. If size bytes are not available to be read, null will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.

    If the size argument is not specified, all of the data contained in the internal buffer will be returned.

    The size argument must be less than or equal to 1 GiB.

    The readable.read() method should only be called on Readable streams operating in paused mode. In flowing mode, readable.read() is called automatically until the internal buffer is fully drained.

    const readable = getReadableStreamSomehow();
    
    // 'readable' may be triggered multiple times as data is buffered in
    readable.on('readable', () => {
      let chunk;
      console.log('Stream is readable (new data received in buffer)');
      // Use a loop to make sure we read all currently available data
      while (null !== (chunk = readable.read())) {
        console.log(`Read ${chunk.length} bytes of data...`);
      }
    });
    
    // 'end' will be triggered once when there is no more data available
    readable.on('end', () => {
      console.log('Reached end of stream.');
    });
    

    Each call to readable.read() returns a chunk of data, or null. The chunks are not concatenated. A while loop is necessary to consume all data currently in the buffer. When reading a large file .read() may return null, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new 'readable' event will be emitted when there is more data in the buffer. Finally the 'end' event will be emitted when there is no more data to come.

    Therefore to read a file's whole contents from a readable, it is necessary to collect chunks across multiple 'readable' events:

    const chunks = [];
    
    readable.on('readable', () => {
      let chunk;
      while (null !== (chunk = readable.read())) {
        chunks.push(chunk);
      }
    });
    
    readable.on('end', () => {
      const content = chunks.join('');
    });
    

    A Readable stream in object mode will always return a single item from a call to readable.read(size), regardless of the value of the size argument.

    If the readable.read() method returns a chunk of data, a 'data' event will also be emitted.

    Calling read after the 'end' event has been emitted will return null. No runtime error will be raised.

    @param size

    Optional argument to specify how much data to read.

  • reduce<T = any>(
    fn: (previous: any, data: any, options?: Pick<ArrayOptions, 'signal'>) => T,
    initial?: undefined,
    options?: Pick<ArrayOptions, 'signal'>
    ): Promise<T>;

    This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.

    If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.

    The reducer function iterates the stream element-by-element which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to readable.map method.

    @param fn

    a reducer function to call over every chunk in the stream. Async or not.

    @param initial

    the initial value to use in the reduction.

    @returns

    a promise for the final value of the reduction.

    reduce<T = any>(
    fn: (previous: T, data: any, options?: Pick<ArrayOptions, 'signal'>) => T,
    initial: T,
    options?: Pick<ArrayOptions, 'signal'>
    ): Promise<T>;

    This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.

    If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.

    The reducer function iterates the stream element-by-element which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to readable.map method.

    @param fn

    a reducer function to call over every chunk in the stream. Async or not.

    @param initial

    the initial value to use in the reduction.

    @returns

    a promise for the final value of the reduction.

  • ref(): this;

    Opposite of unref(), calling ref() on a previously unrefed socket will not let the program exit if it's the only socket left (the default behavior). If the socket is refed, calling ref() again will have no effect.

    @returns

    The socket itself.

  • eventName?: string | symbol
    ): this;

    Removes all listeners, or those of the specified eventName.

    It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).

    Returns a reference to the EventEmitter, so that calls can be chained.

  • event: 'close',
    listener: () => void
    ): this;

    Removes the specified listener from the listener array for the event named eventName.

    const callback = (stream) => {
      console.log('someone connected!');
    };
    server.on('connection', callback);
    // ...
    server.removeListener('connection', callback);
    

    removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance.

    Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from emit() in progress. Subsequent events behave as expected.

    import { EventEmitter } from 'node:events';
    class MyEmitter extends EventEmitter {}
    const myEmitter = new MyEmitter();
    
    const callbackA = () => {
      console.log('A');
      myEmitter.removeListener('event', callbackB);
    };
    
    const callbackB = () => {
      console.log('B');
    };
    
    myEmitter.on('event', callbackA);
    
    myEmitter.on('event', callbackB);
    
    // callbackA removes listener callbackB but it will still be called.
    // Internal listener array at time of emit [callbackA, callbackB]
    myEmitter.emit('event');
    // Prints:
    //   A
    //   B
    
    // callbackB is now removed.
    // Internal listener array [callbackA]
    myEmitter.emit('event');
    // Prints:
    //   A
    

    Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.

    When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed:

    import { EventEmitter } from 'node:events';
    const ee = new EventEmitter();
    
    function pong() {
      console.log('pong');
    }
    
    ee.on('ping', pong);
    ee.once('ping', pong);
    ee.removeListener('ping', pong);
    
    ee.emit('ping');
    ee.emit('ping');
    

    Returns a reference to the EventEmitter, so that calls can be chained.

    event: 'data',
    listener: (chunk: any) => void
    ): this;
    event: 'drain',
    listener: () => void
    ): this;
    event: 'end',
    listener: () => void
    ): this;
    event: 'error',
    listener: (err: Error) => void
    ): this;
    event: 'finish',
    listener: () => void
    ): this;
    event: 'pause',
    listener: () => void
    ): this;
    event: 'pipe',
    listener: (src: Readable) => void
    ): this;
    event: 'readable',
    listener: () => void
    ): this;
    event: 'resume',
    listener: () => void
    ): this;
    event: 'unpipe',
    listener: (src: Readable) => void
    ): this;
    event: string | symbol,
    listener: (...args: any[]) => void
    ): this;
  • options: { rejectUnauthorized: boolean; requestCert: boolean },
    callback: (err: null | Error) => void
    ): undefined | boolean;

    The tlsSocket.renegotiate() method initiates a TLS renegotiation process. Upon completion, the callback function will be passed a single argument that is either an Error (if the request failed) or null.

    This method can be used to request a peer's certificate after the secure connection has been established.

    When running as the server, the socket will be destroyed with an error after handshakeTimeout timeout.

    For TLSv1.3, renegotiation cannot be initiated; it is not supported by the protocol.

    @param callback

    If renegotiate() returned true, callback is attached once to the 'secure' event. If renegotiate() returned false, callback will be called in the next tick with an error, unless the tlsSocket has been destroyed, in which case callback will not be called at all.

    @returns

    true if renegotiation was initiated, false otherwise.
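
    A sketch of a server requesting the client's certificate after the initial handshake (only meaningful for TLSv1.2 and below; socket is assumed to be a server-side TLSSocket):

    socket.renegotiate({ requestCert: true, rejectUnauthorized: false }, (err) => {
      if (err) {
        console.error('renegotiation failed:', err);
        return;
      }
      console.log('peer certificate:', socket.getPeerCertificate().subject);
    });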

  • Close the TCP connection by sending an RST packet and destroy the stream. If this TCP socket is in connecting status, it will send an RST packet and destroy this TCP socket once it is connected. Otherwise, it will call socket.destroy with an ERR_SOCKET_CLOSED Error. If this is not a TCP socket (for example, a pipe), calling this method will immediately throw an ERR_INVALID_HANDLE_TYPE Error.

  • resume(): this;

    Resumes reading after a call to socket.pause().

    @returns

    The socket itself.

  • encoding: BufferEncoding
    ): this;

    The writable.setDefaultEncoding() method sets the default encoding for a Writable stream.

    @param encoding

    The new default encoding

  • encoding?: BufferEncoding
    ): this;

    Set the encoding for the socket as a Readable Stream. See readable.setEncoding() for more information.

    @returns

    The socket itself.

  • enable?: boolean,
    initialDelay?: number
    ): this;

    Enable/disable keep-alive functionality, and optionally set the initial delay before the first keepalive probe is sent on an idle socket.

    Set initialDelay (in milliseconds) to set the delay between the last data packet received and the first keepalive probe. Setting 0 for initialDelay will leave the value unchanged from the default (or previous) setting.

    Enabling the keep-alive functionality will set the following socket options:

    • SO_KEEPALIVE=1
    • TCP_KEEPIDLE=initialDelay
    • TCP_KEEPCNT=10
    • TCP_KEEPINTVL=1
    @returns

    The socket itself.
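
    For example, to enable keep-alive with a one-minute initial delay (a sketch; socket is assumed to be a connected socket):

    // First keep-alive probe after 60 seconds of inactivity.
    socket.setKeepAlive(true, 60_000);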

  • ): void;

    The tlsSocket.setKeyCert() method sets the private key and certificate to use for the socket. This is mainly useful if you wish to select a server certificate from a TLS server's ALPNCallback.

    @param context

    An object containing at least key and cert properties from the tls.createSecureContext() options, or a TLS context object created with tls.createSecureContext() itself.

  • n: number
    ): this;

    By default EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps finding memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.

    Returns a reference to the EventEmitter, so that calls can be chained.

  • size?: number
    ): boolean;

    The tlsSocket.setMaxSendFragment() method sets the maximum TLS fragment size. Returns true if setting the limit succeeded; false otherwise.

    Smaller fragment sizes decrease the buffering latency on the client: larger fragments are buffered by the TLS layer until the entire fragment is received and its integrity is verified; large fragments can span multiple roundtrips and their processing can be delayed due to packet loss or reordering. However, smaller fragments add extra TLS framing bytes and CPU overhead, which may decrease overall server throughput.

    @param size

    The maximum TLS fragment size. The maximum value is 16384.
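
    A sketch of lowering the fragment size once the connection is secure; socket is assumed to be a TLSSocket and 4096 is only an illustrative value:

    socket.on('secureConnect', () => {
      // Smaller fragments trade some throughput for lower buffering latency.
      if (!socket.setMaxSendFragment(4096)) {
        console.warn('could not change the TLS fragment size');
      }
    });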

  • noDelay?: boolean
    ): this;

    Enable/disable the use of Nagle's algorithm.

    When a TCP connection is created, it will have Nagle's algorithm enabled.

    Nagle's algorithm delays data before it is sent via the network. It attempts to optimize throughput at the expense of latency.

    Passing true for noDelay or not passing an argument will disable Nagle's algorithm for the socket. Passing false for noDelay will enable Nagle's algorithm.

    @returns

    The socket itself.

  • timeout: number,
    callback?: () => void
    ): this;

    Sets the socket to timeout after timeout milliseconds of inactivity on the socket. By default, net.Socket does not have a timeout.

    When an idle timeout is triggered the socket will receive a 'timeout' event but the connection will not be severed. The user must manually call socket.end() or socket.destroy() to end the connection.

    socket.setTimeout(3000);
    socket.on('timeout', () => {
      console.log('socket timeout');
      socket.end();
    });
    

    If timeout is 0, then the existing idle timeout is disabled.

    The optional callback parameter will be added as a one-time listener for the 'timeout' event.

    @returns

    The socket itself.

  • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
    options?: ArrayOptions
    ): Promise<boolean>;

    This method is similar to Array.prototype.some and calls fn on each chunk in the stream until the awaited return value is true (or any truthy value). Once an fn call on a chunk awaited return value is truthy, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false.

    @param fn

    a function to call on each chunk of the stream. Async or not.

    @returns

    a promise evaluating to true if fn returned a truthy value for at least one of the chunks.

  • limit: number,
    options?: Pick<ArrayOptions, 'signal'>

    This method returns a new stream with the first limit chunks.

    @param limit

    the number of chunks to take from the readable.

    @returns

    a stream with limit chunks taken.

  • options?: Pick<ArrayOptions, 'signal'>
    ): Promise<any[]>;

    This method allows easily obtaining the contents of a stream.

    As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.

    @returns

    a promise containing an array with the contents of the stream.
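
    A sketch combining the iterator helpers on a small in-memory stream; Readable.from is used only to keep the example self-contained:

    import { Readable } from 'node:stream';

    const result = await Readable.from([1, 2, 3, 4, 5])
      .map((n) => n * 2)
      .filter((n) => n > 4)
      .toArray();

    console.log(result); // [6, 8, 10]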

  • uncork(): void;

    The writable.uncork() method flushes all data buffered since cork was called.

    When using writable.cork() and writable.uncork() to manage the buffering of writes to a stream, defer calls to writable.uncork() using process.nextTick(). Doing so allows batching of all writable.write() calls that occur within a given Node.js event loop phase.

    stream.cork();
    stream.write('some ');
    stream.write('data ');
    process.nextTick(() => stream.uncork());
    

    If the writable.cork() method is called multiple times on a stream, the same number of calls to writable.uncork() must be called to flush the buffered data.

    stream.cork();
    stream.write('some ');
    stream.cork();
    stream.write('data ');
    process.nextTick(() => {
      stream.uncork();
      // The data will not be flushed until uncork() is called a second time.
      stream.uncork();
    });
    

    See also: writable.cork().

  • destination?: WritableStream
    ): this;

    The readable.unpipe() method detaches a Writable stream previously attached using the pipe method.

    If the destination is not specified, then all pipes are detached.

    If the destination is specified, but no pipe is set up for it, then the method does nothing.

    import fs from 'node:fs';
    const readable = getReadableStreamSomehow();
    const writable = fs.createWriteStream('file.txt');
    // All the data from readable goes into 'file.txt',
    // but only for the first second.
    readable.pipe(writable);
    setTimeout(() => {
      console.log('Stop writing to file.txt.');
      readable.unpipe(writable);
      console.log('Manually close the file stream.');
      writable.end();
    }, 1000);
    
    @param destination

    Optional specific stream to unpipe

  • unref(): this;

    Calling unref() on a socket will allow the program to exit if this is the only active socket in the event system. If the socket is already unrefed, calling unref() again will have no effect.

    @returns

    The socket itself.
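
    For example (a sketch, assuming socket is an instance of this class; ref() restores the default behavior):

    socket.unref(); // this socket alone will no longer keep the process alive
    // ...later, to make the socket count towards keeping the process alive again:
    socket.ref();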

  • chunk: any,
    encoding?: BufferEncoding
    ): void;

    The readable.unshift() method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.

    Passing chunk as null signals the end of the stream (EOF) and behaves the same as readable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.

    The stream.unshift(chunk) method cannot be called after the 'end' event has been emitted or a runtime error will be thrown.

    Developers using stream.unshift() often should consider switching to use of a Transform stream instead. See the API for stream implementers section for more information.

    // Pull off a header delimited by \n\n.
    // Use unshift() if we get too much.
    // Call the callback with (error, header, stream).
    import { StringDecoder } from 'node:string_decoder';
    function parseHeader(stream, callback) {
      stream.on('error', callback);
      stream.on('readable', onReadable);
      const decoder = new StringDecoder('utf8');
      let header = '';
      function onReadable() {
        let chunk;
        while (null !== (chunk = stream.read())) {
          const str = decoder.write(chunk);
          if (str.includes('\n\n')) {
            // Found the header boundary.
            const split = str.split(/\n\n/);
            header += split.shift();
            const remaining = split.join('\n\n');
            const buf = Buffer.from(remaining, 'utf8');
            stream.removeListener('error', callback);
            // Remove the 'readable' listener before unshifting.
            stream.removeListener('readable', onReadable);
            if (buf.length)
              stream.unshift(buf);
            // Now the body of the message can be read from the stream.
            callback(null, header, stream);
            return;
          }
          // Still reading the header.
          header += str;
        }
      }
    }
    

    Unlike push, stream.unshift(chunk) will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results if readable.unshift() is called during a read (i.e. from within a _read implementation on a custom stream). Following the call to readable.unshift() with an immediate push will reset the reading state appropriately, however it is best to simply avoid calling readable.unshift() while in the process of performing a read.

    @param chunk

    Chunk of data to unshift onto the read queue. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray}, {DataView} or null. For object mode streams, chunk may be any JavaScript value.

    @param encoding

    Encoding of string chunks. Must be a valid Buffer encoding, such as 'utf8' or 'ascii'.

  • stream: ReadableStream
    ): this;

    Prior to Node.js 0.10, streams did not implement the entire node:stream module API as it is currently defined. (See Compatibility for more information.)

    When using an older Node.js library that emits 'data' events and has a pause method that is advisory only, the readable.wrap() method can be used to create a Readable stream that uses the old stream as its data source.

    It will rarely be necessary to use readable.wrap() but the method has been provided as a convenience for interacting with older Node.js applications and libraries.

    import { OldReader } from './old-api-module.js';
    import { Readable } from 'node:stream';
    const oreader = new OldReader();
    const myReader = new Readable().wrap(oreader);
    
    myReader.on('readable', () => {
      myReader.read(); // etc.
    });
    
    @param stream

    An "old style" readable stream

  • buffer: string | Uint8Array<ArrayBufferLike>,
    cb?: (err?: Error) => void
    ): boolean;

    Sends data on the socket. The second parameter specifies the encoding in the case of a string. It defaults to UTF8 encoding.

    Returns true if the entire data was flushed successfully to the kernel buffer. Returns false if all or part of the data was queued in user memory. 'drain' will be emitted when the buffer is again free.

    The optional callback parameter will be executed when the data is finally written out, which may not be immediately.

    See Writable stream write() method for more information.

    str: string | Uint8Array<ArrayBufferLike>,
    encoding?: BufferEncoding,
    cb?: (err?: Error) => void
    ): boolean;

    Sends data on the socket. The second parameter specifies the encoding in the case of a string. It defaults to UTF8 encoding.

    Returns true if the entire data was flushed successfully to the kernel buffer. Returns false if all or part of the data was queued in user memory. 'drain' will be emitted when the buffer is again free.

    The optional callback parameter will be executed when the data is finally written out, which may not be immediately.

    See Writable stream write() method for more information.

    @param encoding

    Only used when data is string.
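
    For example, a sketch of handling back-pressure (assuming socket is a connected instance of this class):

    const flushed = socket.write('hello', 'utf8', () => {
      // Runs once the data has actually been written out.
    });
    if (!flushed) {
      // Part of the data was queued in user memory; wait for 'drain' before writing more.
      socket.once('drain', () => socket.write('world'));
    }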

  • signal: AbortSignal,
    resource: (event: Event) => void
    ): Disposable;

    Listens once to the abort event on the provided signal.

    Listening to the abort event on abort signals is unsafe and may lead to resource leaks since another third party with the signal can call e.stopImmediatePropagation(). Unfortunately Node.js cannot change this since it would violate the web standard. Additionally, the original API makes it easy to forget to remove listeners.

    This API allows safely using AbortSignals in Node.js APIs by solving these two issues by listening to the event such that stopImmediatePropagation does not prevent the listener from running.

    Returns a disposable so that it may be unsubscribed from more easily.

    import { addAbortListener } from 'node:events';
    
    function example(signal) {
      let disposable;
      try {
        signal.addEventListener('abort', (e) => e.stopImmediatePropagation());
        disposable = addAbortListener(signal, (e) => {
          // Do something when signal is aborted.
        });
      } finally {
        disposable?.[Symbol.dispose]();
      }
    }
    
    @returns

    Disposable that removes the abort listener.

  • static from(
    src: string | Object | Stream | ArrayBuffer | Blob | Iterable<any, any, any> | AsyncIterable<any, any, any> | AsyncGeneratorFunction | Promise<any>
    ): Duplex;

    A utility method for creating duplex streams.

    • Stream converts writable stream into writable Duplex and readable stream to Duplex.
    • Blob converts into readable Duplex.
    • string converts into readable Duplex.
    • ArrayBuffer converts into readable Duplex.
    • AsyncIterable converts into a readable Duplex. Cannot yield null.
    • AsyncGeneratorFunction converts into a readable/writable transform Duplex. Must take a source AsyncIterable as first parameter. Cannot yield null.
    • AsyncFunction converts into a writable Duplex. Must return either null or undefined.
    • Object ({ writable, readable }) converts readable and writable into Stream and then combines them into Duplex where the Duplex will write to the writable and read from the readable.
    • Promise converts into readable Duplex. Value null is ignored.
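
    As a sketch of one of the conversions listed above, an async iterable becomes a readable-only Duplex:

    import { Duplex } from 'node:stream';

    async function* source() {
      yield 'hello ';
      yield 'world';
    }

    // Passing the iterable (not the generator function itself) yields a readable Duplex.
    const duplex = Duplex.from(source());
    duplex.on('data', (chunk) => console.log(chunk.toString()));
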
  • static fromWeb(
    duplexStream: { readable: ReadableStream; writable: WritableStream },
    options?: Pick<DuplexOptions<Duplex>, 'signal' | 'allowHalfOpen' | 'decodeStrings' | 'encoding' | 'highWaterMark' | 'objectMode'>
    ): Duplex;

    A utility method for creating a Duplex from a web ReadableStream and WritableStream.
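
    A minimal sketch (the web stream constructors are also available globally in recent Node.js and Bun; here they are imported from node:stream/web):

    import { Duplex } from 'node:stream';
    import { ReadableStream, WritableStream } from 'node:stream/web';

    const readable = new ReadableStream({
      start(controller) {
        controller.enqueue('from the web side');
        controller.close();
      },
    });
    const writable = new WritableStream({
      write(chunk) {
        console.log('web side received:', chunk);
      },
    });

    const duplex = Duplex.fromWeb({ readable, writable }, { objectMode: true });
    duplex.on('data', (chunk) => console.log('node side read:', chunk));
    duplex.write('from the node side');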

  • emitter: EventEmitter<DefaultEventMap> | EventTarget,
    name: string | symbol
    ): Function[];

    Returns a copy of the array of listeners for the event named eventName.

    For EventEmitters this behaves exactly the same as calling .listeners on the emitter.

    For EventTargets this is the only way to get the event listeners for the event target. This is useful for debugging and diagnostic purposes.

    import { getEventListeners, EventEmitter } from 'node:events';
    
    {
      const ee = new EventEmitter();
      const listener = () => console.log('Events are fun');
      ee.on('foo', listener);
      console.log(getEventListeners(ee, 'foo')); // [ [Function: listener] ]
    }
    {
      const et = new EventTarget();
      const listener = () => console.log('Events are fun');
      et.addEventListener('foo', listener);
      console.log(getEventListeners(et, 'foo')); // [ [Function: listener] ]
    }
    
  • emitter: EventEmitter<DefaultEventMap> | EventTarget
    ): number;

    Returns the currently set max amount of listeners.

    For EventEmitters this behaves exactly the same as calling .getMaxListeners on the emitter.

    For EventTargets this is the only way to get the max event listeners for the event target. If the number of event handlers on a single EventTarget exceeds the max set, the EventTarget will print a warning.

    import { getMaxListeners, setMaxListeners, EventEmitter } from 'node:events';
    
    {
      const ee = new EventEmitter();
      console.log(getMaxListeners(ee)); // 10
      setMaxListeners(11, ee);
      console.log(getMaxListeners(ee)); // 11
    }
    {
      const et = new EventTarget();
      console.log(getMaxListeners(et)); // 10
      setMaxListeners(11, et);
      console.log(getMaxListeners(et)); // 11
    }
    
  • static on(
    emitter: EventEmitter,
    eventName: string | symbol,
    options?: StaticEventEmitterIteratorOptions
    ): AsyncIterator<any[]>;
    import { on, EventEmitter } from 'node:events';
    import process from 'node:process';
    
    const ee = new EventEmitter();
    
    // Emit later on
    process.nextTick(() => {
      ee.emit('foo', 'bar');
      ee.emit('foo', 42);
    });
    
    for await (const event of on(ee, 'foo')) {
      // The execution of this inner block is synchronous and it
      // processes one event at a time (even with await). Do not use
      // if concurrent execution is required.
      console.log(event); // prints ['bar'] [42]
    }
    // Unreachable here
    

    Returns an AsyncIterator that iterates eventName events. It will throw if the EventEmitter emits 'error'. It removes all listeners when exiting the loop. The value returned by each iteration is an array composed of the emitted event arguments.

    An AbortSignal can be used to cancel waiting on events:

    import { on, EventEmitter } from 'node:events';
    import process from 'node:process';
    
    const ac = new AbortController();
    
    (async () => {
      const ee = new EventEmitter();
    
      // Emit later on
      process.nextTick(() => {
        ee.emit('foo', 'bar');
        ee.emit('foo', 42);
      });
    
      for await (const event of on(ee, 'foo', { signal: ac.signal })) {
        // The execution of this inner block is synchronous and it
        // processes one event at a time (even with await). Do not use
        // if concurrent execution is required.
        console.log(event); // prints ['bar'] [42]
      }
      // Unreachable here
    })();
    
    process.nextTick(() => ac.abort());
    

    Use the close option to specify an array of event names that will end the iteration:

    import { on, EventEmitter } from 'node:events';
    import process from 'node:process';
    
    const ee = new EventEmitter();
    
    // Emit later on
    process.nextTick(() => {
      ee.emit('foo', 'bar');
      ee.emit('foo', 42);
      ee.emit('close');
    });
    
    for await (const event of on(ee, 'foo', { close: ['close'] })) {
      console.log(event); // prints ['bar'] [42]
    }
    // the loop will exit after 'close' is emitted
    console.log('done'); // prints 'done'
    
    @returns

    An AsyncIterator that iterates eventName events emitted by the emitter

    static on(
    emitter: EventTarget,
    eventName: string,
    options?: StaticEventEmitterIteratorOptions
    ): AsyncIterator<any[]>;
    import { on, EventEmitter } from 'node:events';
    import process from 'node:process';
    
    const ee = new EventEmitter();
    
    // Emit later on
    process.nextTick(() => {
      ee.emit('foo', 'bar');
      ee.emit('foo', 42);
    });
    
    for await (const event of on(ee, 'foo')) {
      // The execution of this inner block is synchronous and it
      // processes one event at a time (even with await). Do not use
      // if concurrent execution is required.
      console.log(event); // prints ['bar'] [42]
    }
    // Unreachable here
    

    Returns an AsyncIterator that iterates eventName events. It will throw if the EventEmitter emits 'error'. It removes all listeners when exiting the loop. The value returned by each iteration is an array composed of the emitted event arguments.

    An AbortSignal can be used to cancel waiting on events:

    import { on, EventEmitter } from 'node:events';
    import process from 'node:process';
    
    const ac = new AbortController();
    
    (async () => {
      const ee = new EventEmitter();
    
      // Emit later on
      process.nextTick(() => {
        ee.emit('foo', 'bar');
        ee.emit('foo', 42);
      });
    
      for await (const event of on(ee, 'foo', { signal: ac.signal })) {
        // The execution of this inner block is synchronous and it
        // processes one event at a time (even with await). Do not use
        // if concurrent execution is required.
        console.log(event); // prints ['bar'] [42]
      }
      // Unreachable here
    })();
    
    process.nextTick(() => ac.abort());
    

    Use the close option to specify an array of event names that will end the iteration:

    import { on, EventEmitter } from 'node:events';
    import process from 'node:process';
    
    const ee = new EventEmitter();
    
    // Emit later on
    process.nextTick(() => {
      ee.emit('foo', 'bar');
      ee.emit('foo', 42);
      ee.emit('close');
    });
    
    for await (const event of on(ee, 'foo', { close: ['close'] })) {
      console.log(event); // prints ['bar'] [42]
    }
    // the loop will exit after 'close' is emitted
    console.log('done'); // prints 'done'
    
    @returns

    An AsyncIterator that iterates eventName events emitted by the emitter

  • static once(
    emitter: EventEmitter,
    eventName: string | symbol,
    options?: StaticEventEmitterOptions
    ): Promise<any[]>;

    Creates a Promise that is fulfilled when the EventEmitter emits the given event or that is rejected if the EventEmitter emits 'error' while waiting. The Promise will resolve with an array of all the arguments emitted to the given event.

    This method is intentionally generic and works with the web platform EventTarget interface, which has no special 'error' event semantics and does not listen to the 'error' event.

    import { once, EventEmitter } from 'node:events';
    import process from 'node:process';
    
    const ee = new EventEmitter();
    
    process.nextTick(() => {
      ee.emit('myevent', 42);
    });
    
    const [value] = await once(ee, 'myevent');
    console.log(value);
    
    const err = new Error('kaboom');
    process.nextTick(() => {
      ee.emit('error', err);
    });
    
    try {
      await once(ee, 'myevent');
    } catch (err) {
      console.error('error happened', err);
    }
    

    The special handling of the 'error' event is only used when events.once() is used to wait for another event. If events.once() is used to wait for the 'error' event itself, then it is treated as any other kind of event without special handling:

    import { EventEmitter, once } from 'node:events';
    
    const ee = new EventEmitter();
    
    once(ee, 'error')
      .then(([err]) => console.log('ok', err.message))
      .catch((err) => console.error('error', err.message));
    
    ee.emit('error', new Error('boom'));
    
    // Prints: ok boom
    

    An AbortSignal can be used to cancel waiting for the event:

    import { EventEmitter, once } from 'node:events';
    
    const ee = new EventEmitter();
    const ac = new AbortController();
    
    async function foo(emitter, event, signal) {
      try {
        await once(emitter, event, { signal });
        console.log('event emitted!');
      } catch (error) {
        if (error.name === 'AbortError') {
          console.error('Waiting for the event was canceled!');
        } else {
          console.error('There was an error', error.message);
        }
      }
    }
    
    foo(ee, 'foo', ac.signal);
    ac.abort(); // Abort waiting for the event
    ee.emit('foo'); // Prints: Waiting for the event was canceled!
    
    static once(
    emitter: EventTarget,
    eventName: string,
    options?: StaticEventEmitterOptions
    ): Promise<any[]>;

    Creates a Promise that is fulfilled when the EventEmitter emits the given event or that is rejected if the EventEmitter emits 'error' while waiting. The Promise will resolve with an array of all the arguments emitted to the given event.

    This method is intentionally generic and works with the web platform EventTarget interface, which has no special 'error' event semantics and does not listen to the 'error' event.

    import { once, EventEmitter } from 'node:events';
    import process from 'node:process';
    
    const ee = new EventEmitter();
    
    process.nextTick(() => {
      ee.emit('myevent', 42);
    });
    
    const [value] = await once(ee, 'myevent');
    console.log(value);
    
    const err = new Error('kaboom');
    process.nextTick(() => {
      ee.emit('error', err);
    });
    
    try {
      await once(ee, 'myevent');
    } catch (err) {
      console.error('error happened', err);
    }
    

    The special handling of the 'error' event is only used when events.once() is used to wait for another event. If events.once() is used to wait for the 'error' event itself, then it is treated as any other kind of event without special handling:

    import { EventEmitter, once } from 'node:events';
    
    const ee = new EventEmitter();
    
    once(ee, 'error')
      .then(([err]) => console.log('ok', err.message))
      .catch((err) => console.error('error', err.message));
    
    ee.emit('error', new Error('boom'));
    
    // Prints: ok boom
    

    An AbortSignal can be used to cancel waiting for the event:

    import { EventEmitter, once } from 'node:events';
    
    const ee = new EventEmitter();
    const ac = new AbortController();
    
    async function foo(emitter, event, signal) {
      try {
        await once(emitter, event, { signal });
        console.log('event emitted!');
      } catch (error) {
        if (error.name === 'AbortError') {
          console.error('Waiting for the event was canceled!');
        } else {
          console.error('There was an error', error.message);
        }
      }
    }
    
    foo(ee, 'foo', ac.signal);
    ac.abort(); // Abort waiting for the event
    ee.emit('foo'); // Prints: Waiting for the event was canceled!
    
  • n?: number,
    ...eventTargets: EventEmitter<DefaultEventMap> | EventTarget[]
    ): void;
    import { setMaxListeners, EventEmitter } from 'node:events';
    
    const target = new EventTarget();
    const emitter = new EventEmitter();
    
    setMaxListeners(5, target, emitter);
    
    @param n

    A non-negative number. The maximum number of listeners per EventTarget event.

    @param eventTargets

    Zero or more {EventTarget} or {EventEmitter} instances. If none are specified, n is set as the default max for all newly created {EventTarget} and {EventEmitter} objects.

  • static toWeb(
    streamDuplex: Duplex
    ): { readable: ReadableStream; writable: WritableStream };

    A utility method for creating a web ReadableStream and WritableStream from a Duplex.
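
    A minimal sketch (assuming a simple Duplex whose write side just logs what it receives):

    import { Duplex } from 'node:stream';

    const duplex = new Duplex({
      read() {},
      write(chunk, encoding, callback) {
        console.log('node side received:', chunk.toString());
        callback();
      },
    });

    const { readable, writable } = Duplex.toWeb(duplex);
    // `readable` and `writable` are web streams backed by the Duplex above.
    const writer = writable.getWriter();
    await writer.write('hello from the web side');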