
Node.js module

crypto

The 'node:crypto' module provides cryptographic functionality, including wrappers for OpenSSL's hash, HMAC, cipher, decipher, sign, verify, and key derivation functions.

It supports common algorithms such as SHA-256, AES, RSA, ECDH, and more. The module also offers secure random number generation, key management, and certificate handling, making it essential for implementing secure protocols and data encryption.

Works in Bun

Most crypto functionality is implemented, but some specific methods related to engine configuration, FIPS mode, and secure heap usage are missing.

  • namespace constants

  • class Cipher

    Instances of the Cipher class are used to encrypt data. The class can be used in one of two ways:

    • As a stream that is both readable and writable, where plain unencrypted data is written to produce encrypted data on the readable side, or
    • Using the cipher.update() and cipher.final() methods to produce the encrypted data.

    The createCipheriv method is used to create Cipher instances. Cipher objects are not to be created directly using the new keyword.

    Example: Using Cipher objects as streams:

    const {
      scrypt,
      randomFill,
      createCipheriv,
    } = await import('node:crypto');
    
    const algorithm = 'aes-192-cbc';
    const password = 'Password used to generate key';
    
    // First, we'll generate the key. The key length is dependent on the algorithm.
    // In this case for aes192, it is 24 bytes (192 bits).
    scrypt(password, 'salt', 24, (err, key) => {
      if (err) throw err;
      // Then, we'll generate a random initialization vector
      randomFill(new Uint8Array(16), (err, iv) => {
        if (err) throw err;
    
        // Once we have the key and iv, we can create and use the cipher...
        const cipher = createCipheriv(algorithm, key, iv);
    
        let encrypted = '';
        cipher.setEncoding('hex');
    
        cipher.on('data', (chunk) => encrypted += chunk);
        cipher.on('end', () => console.log(encrypted));
    
        cipher.write('some clear text data');
        cipher.end();
      });
    });
    

    Example: Using Cipher and piped streams:

    import {
      createReadStream,
      createWriteStream,
    } from 'node:fs';
    
    import {
      pipeline,
    } from 'node:stream';
    
    const {
      scrypt,
      randomFill,
      createCipheriv,
    } = await import('node:crypto');
    
    const algorithm = 'aes-192-cbc';
    const password = 'Password used to generate key';
    
    // First, we'll generate the key. The key length is dependent on the algorithm.
    // In this case for aes192, it is 24 bytes (192 bits).
    scrypt(password, 'salt', 24, (err, key) => {
      if (err) throw err;
      // Then, we'll generate a random initialization vector
      randomFill(new Uint8Array(16), (err, iv) => {
        if (err) throw err;
    
        const cipher = createCipheriv(algorithm, key, iv);
    
        const input = createReadStream('test.js');
        const output = createWriteStream('test.enc');
    
        pipeline(input, cipher, output, (err) => {
          if (err) throw err;
        });
      });
    });
    

    Example: Using the cipher.update() and cipher.final() methods:

    const {
      scrypt,
      randomFill,
      createCipheriv,
    } = await import('node:crypto');
    
    const algorithm = 'aes-192-cbc';
    const password = 'Password used to generate key';
    
    // First, we'll generate the key. The key length is dependent on the algorithm.
    // In this case for aes192, it is 24 bytes (192 bits).
    scrypt(password, 'salt', 24, (err, key) => {
      if (err) throw err;
      // Then, we'll generate a random initialization vector
      randomFill(new Uint8Array(16), (err, iv) => {
        if (err) throw err;
    
        const cipher = createCipheriv(algorithm, key, iv);
    
        let encrypted = cipher.update('some clear text data', 'utf8', 'hex');
        encrypted += cipher.final('hex');
        console.log(encrypted);
      });
    });
    
    • allowHalfOpen: boolean

      If false then the stream will automatically end the writable side when the readable side ends. Set initially by the allowHalfOpen constructor option, which defaults to true.

      This can be changed manually to change the half-open behavior of an existing Duplex stream instance, but must be changed before the 'end' event is emitted.

    • readonly closed: boolean

      Is true after 'close' has been emitted.

    • destroyed: boolean

      Is true after readable.destroy() has been called.

    • readonly errored: null | Error

      Returns error if the stream has been destroyed with an error.

    • readable: boolean

      Is true if it is safe to call read, which means the stream has not been destroyed or emitted 'error' or 'end'.

    • readonly readableAborted: boolean

      Returns whether the stream was destroyed or errored before emitting 'end'.

    • readonly readableDidRead: boolean

      Returns whether 'data' has been emitted.

    • readonly readableEncoding: null | BufferEncoding

      Getter for the property encoding of a given Readable stream. The encoding property can be set using the setEncoding method.

    • readonly readableEnded: boolean

      Becomes true when 'end' event is emitted.

    • readonly readableFlowing: null | boolean

      This property reflects the current state of a Readable stream as described in the Three states section.

    • readonly readableHighWaterMark: number

      Returns the value of highWaterMark passed when creating this Readable.

    • readonly readableLength: number

      This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the highWaterMark.

    • readonly readableObjectMode: boolean

      Getter for the property objectMode of a given Readable stream.

    • readonly writable: boolean

      Is true if it is safe to call writable.write(), which means the stream has not been destroyed, errored, or ended.

    • readonly writableCorked: number

      Number of times writable.uncork() needs to be called in order to fully uncork the stream.

    • readonly writableEnded: boolean

      Is true after writable.end() has been called. This property does not indicate whether the data has been flushed, for this use writable.writableFinished instead.

    • readonly writableFinished: boolean

      Is set to true immediately before the 'finish' event is emitted.

    • readonly writableHighWaterMark: number

      Return the value of highWaterMark passed when creating this Writable.

    • readonly writableLength: number

      This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the highWaterMark.

    • readonly writableNeedDrain: boolean

      Is true if the stream's buffer has been full and stream will emit 'drain'.

    • readonly writableObjectMode: boolean

      Getter for the property objectMode of a given Writable stream.

    • static captureRejections: boolean

      Value: boolean

      Change the default captureRejections option on all new EventEmitter objects.

    • readonly static captureRejectionSymbol: typeof captureRejectionSymbol

      Value: Symbol.for('nodejs.rejection')

      See how to write a custom rejection handler.

    • static defaultMaxListeners: number

      By default, a maximum of 10 listeners can be registered for any single event. This limit can be changed for individual EventEmitter instances using the emitter.setMaxListeners(n) method. To change the default for all EventEmitter instances, the events.defaultMaxListeners property can be used. If this value is not a positive number, a RangeError is thrown.

      Take caution when setting the events.defaultMaxListeners because the change affects all EventEmitter instances, including those created before the change is made. However, calling emitter.setMaxListeners(n) still has precedence over events.defaultMaxListeners.

      This is not a hard limit. The EventEmitter instance will allow more listeners to be added but will output a trace warning to stderr indicating that a "possible EventEmitter memory leak" has been detected. For any single EventEmitter, the emitter.getMaxListeners() and emitter.setMaxListeners() methods can be used to temporarily avoid this warning:

      import { EventEmitter } from 'node:events';
      const emitter = new EventEmitter();
      emitter.setMaxListeners(emitter.getMaxListeners() + 1);
      emitter.once('event', () => {
        // do stuff
        emitter.setMaxListeners(Math.max(emitter.getMaxListeners() - 1, 0));
      });
      

      The --trace-warnings command-line flag can be used to display the stack trace for such warnings.

      The emitted warning can be inspected with process.on('warning') and will have the additional emitter, type, and count properties, referring to the event emitter instance, the event's name and the number of attached listeners, respectively. Its name property is set to 'MaxListenersExceededWarning'.

    • readonly static errorMonitor: typeof errorMonitor

      This symbol shall be used to install a listener for only monitoring 'error' events. Listeners installed using this symbol are called before the regular 'error' listeners are called.

      Installing a listener using this symbol does not change the behavior once an 'error' event is emitted. Therefore, the process will still crash if no regular 'error' listener is installed.

    • _construct(callback: (error?: null | Error) => void): void;
    • _destroy(error: null | Error, callback: (error?: null | Error) => void): void;
    • _final(callback: (error?: null | Error) => void): void;
    • _flush(callback: (error?: null | Error, data?: any) => void): void;
    • _read(size: number): void;
    • _transform(chunk: any, encoding: BufferEncoding, callback: (error?: null | Error, data?: any) => void): void;
    • _write(chunk: any, encoding: BufferEncoding, callback: (error?: null | Error) => void): void;
    • _writev(chunks: { chunk: any; encoding: BufferEncoding }[], callback: (error?: null | Error) => void): void;
    • [Symbol.asyncDispose](): Promise<void>;

      Calls readable.destroy() with an AbortError and returns a promise that fulfills when the stream is finished.

    • [Symbol.asyncIterator](): AsyncIterator<any>;
    • [captureRejectionSymbol](
      error: Error,
      event: string | symbol,
      ...args: AnyRest
      ): void;
    • addListener(event: 'close', listener: () => void): this;
      addListener(event: 'data', listener: (chunk: any) => void): this;
      addListener(event: 'drain', listener: () => void): this;
      addListener(event: 'end', listener: () => void): this;
      addListener(event: 'error', listener: (err: Error) => void): this;
      addListener(event: 'finish', listener: () => void): this;
      addListener(event: 'pause', listener: () => void): this;
      addListener(event: 'pipe', listener: (src: Readable) => void): this;
      addListener(event: 'readable', listener: () => void): this;
      addListener(event: 'resume', listener: () => void): this;
      addListener(event: 'unpipe', listener: (src: Readable) => void): this;
      addListener(event: string | symbol, listener: (...args: any[]) => void): this;

      Event emitter. The defined events include:

      1. close
      2. data
      3. drain
      4. end
      5. error
      6. finish
      7. pause
      8. pipe
      9. readable
      10. resume
      11. unpipe
    • asIndexedPairs(options?: Pick<ArrayOptions, 'signal'>): Readable;

      This method returns a new stream with chunks of the underlying stream paired with a counter in the form [index, chunk]. The first index value is 0 and it increases by 1 for each chunk produced.

      @returns

      a stream of indexed pairs.

    • compose<T extends ReadableStream>(
      stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,
      options?: { signal: AbortSignal }
      ): T;
    • cork(): void;

      The writable.cork() method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.

      The primary intent of writable.cork() is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them all to writable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork() without implementing writable._writev() may have an adverse effect on throughput.

      See also: writable.uncork(), writable._writev().

    • destroy(error?: Error): this;

      Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the readable stream will release any internal resources and subsequent calls to push() will be ignored.

      Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.

      Implementors should not override this method, but instead implement readable._destroy().

      @param error

      Error which will be passed as payload in 'error' event
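
      A minimal sketch of destroying a stream with an error (using a throwaway Readable.from stream):

      import { Readable } from 'node:stream';
      
      const readable = Readable.from(['some data']);
      readable.on('error', (err) => console.error('destroyed with:', err.message));
      readable.on('close', () => console.log('closed'));
      readable.destroy(new Error('stop'));
      // Prints "destroyed with: stop", then "closed".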

    • drop(limit: number, options?: Pick<ArrayOptions, 'signal'>): Readable;

      This method returns a new stream with the first limit chunks dropped from the start.

      @param limit

      the number of chunks to drop from the readable.

      @returns

      a stream with limit chunks dropped from the start.
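
      For illustration, a minimal sketch using a throwaway stream built with Readable.from (any Readable, including a Cipher, exposes the same helper):

      import { Readable } from 'node:stream';
      
      const rest = await Readable.from([1, 2, 3, 4]).drop(2).toArray();
      console.log(rest); // [3, 4]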

    • emit(event: 'close'): boolean;
      emit(event: 'data', chunk: any): boolean;
      emit(event: 'drain'): boolean;
      emit(event: 'end'): boolean;
      emit(event: 'error', err: Error): boolean;
      emit(event: 'finish'): boolean;
      emit(event: 'pause'): boolean;
      emit(event: 'pipe', src: Readable): boolean;
      emit(event: 'readable'): boolean;
      emit(event: 'resume'): boolean;
      emit(event: 'unpipe', src: Readable): boolean;
      emit(event: string | symbol, ...args: any[]): boolean;

      Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

      Returns true if the event had listeners, false otherwise.

      import { EventEmitter } from 'node:events';
      const myEmitter = new EventEmitter();
      
      // First listener
      myEmitter.on('event', function firstListener() {
        console.log('Helloooo! first listener');
      });
      // Second listener
      myEmitter.on('event', function secondListener(arg1, arg2) {
        console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
      });
      // Third listener
      myEmitter.on('event', function thirdListener(...args) {
        const parameters = args.join(', ');
        console.log(`event with parameters ${parameters} in third listener`);
      });
      
      console.log(myEmitter.listeners('event'));
      
      myEmitter.emit('event', 1, 2, 3, 4, 5);
      
      // Prints:
      // [
      //   [Function: firstListener],
      //   [Function: secondListener],
      //   [Function: thirdListener]
      // ]
      // Helloooo! first listener
      // event with parameters 1, 2 in second listener
      // event with parameters 1, 2, 3, 4, 5 in third listener
      
    • end(cb?: () => void): this;
      end(chunk: any, cb?: () => void): this;
      end(chunk: any, encoding: BufferEncoding, cb?: () => void): this;

      Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

      Calling the write method after calling end will raise an error.

      // Write 'hello, ' and then end with 'world!'.
      import fs from 'node:fs';
      const file = fs.createWriteStream('example.txt');
      file.write('hello, ');
      file.end('world!');
      // Writing more now is not allowed!
      
      @param chunk

      Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

      @param encoding

      The encoding if chunk is a string

    • eventNames(): (string | symbol)[];

      Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or Symbols.

      import { EventEmitter } from 'node:events';
      
      const myEE = new EventEmitter();
      myEE.on('foo', () => {});
      myEE.on('bar', () => {});
      
      const sym = Symbol('symbol');
      myEE.on(sym, () => {});
      
      console.log(myEE.eventNames());
      // Prints: [ 'foo', 'bar', Symbol(symbol) ]
      
    • every(
      fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
      options?: ArrayOptions
      ): Promise<boolean>;

      This method is similar to Array.prototype.every and calls fn on each chunk in the stream to check whether every awaited return value is truthy. Once an fn call on a chunk's awaited return value is falsy, the stream is destroyed and the promise is fulfilled with false. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true.

      @param fn

      a function to call on each chunk of the stream. Async or not.

      @returns

      a promise evaluating to true if fn returned a truthy value for every one of the chunks.
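
      A short sketch:

      import { Readable } from 'node:stream';
      
      const allPositive = await Readable.from([1, 2, 3]).every((n) => n > 0);
      console.log(allPositive); // true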

    • filter(
      fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
      options?: ArrayOptions
      ): Readable;

      This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be awaited.

      @param fn

      a function to filter chunks from the stream. Async or not.

      @returns

      a stream filtered with the predicate fn.
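
      For example:

      import { Readable } from 'node:stream';
      
      const evens = await Readable.from([1, 2, 3, 4]).filter((n) => n % 2 === 0).toArray();
      console.log(evens); // [2, 4]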

    • final(): Buffer;
      final(outputEncoding: BufferEncoding): string;

      Once the cipher.final() method has been called, the Cipher object can no longer be used to encrypt data. Attempts to call cipher.final() more than once will result in an error being thrown.

      @param outputEncoding

      The encoding of the return value.

      @returns

      Any remaining enciphered contents. If outputEncoding is specified, a string is returned. If an outputEncoding is not provided, a Buffer is returned.

    • find<T>(
      fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => data is T,
      options?: ArrayOptions
      ): Promise<undefined | T>;
      find(
      fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
      options?: ArrayOptions
      ): Promise<any>;

      This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.

      @param fn

      a function to call on each chunk of the stream. Async or not.

      @returns

      a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.
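
      A minimal sketch:

      import { Readable } from 'node:stream';
      
      const firstBig = await Readable.from([1, 2, 3, 4]).find((n) => n > 2);
      console.log(firstBig); // 3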

    • flatMap(
      fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => any,
      options?: ArrayOptions
      ): Readable;

      This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.

      It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.

      @param fn

      a function to map over every chunk in the stream. May be async. May be a stream or generator.

      @returns

      a stream flat-mapped with the function fn.
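
      For example, splitting strings into words:

      import { Readable } from 'node:stream';
      
      const words = await Readable.from(['a b', 'c']).flatMap((s) => s.split(' ')).toArray();
      console.log(words); // ['a', 'b', 'c']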

    • forEach(
      fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => void | Promise<void>,
      options?: ArrayOptions
      ): Promise<void>;

      This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be awaited.

      This method is different from for await...of loops in that it can optionally process chunks concurrently. In addition, a forEach iteration can only be stopped by having passed a signal option and aborting the related AbortController while for await...of can be stopped with break or return. In either case the stream will be destroyed.

      This method is different from listening to the 'data' event in that it uses the readable event in the underlying machinery and can limit the number of concurrent fn calls.

      @param fn

      a function to call on each chunk of the stream. Async or not.

      @returns

      a promise for when the stream has finished.
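
      A short sketch:

      import { Readable } from 'node:stream';
      
      await Readable.from([1, 2, 3]).forEach((n) => console.log(n));
      // Prints 1, 2, 3; the returned promise resolves when the stream is done.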

    • getMaxListeners(): number;

      Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to EventEmitter.defaultMaxListeners.

    • isPaused(): boolean;

      The readable.isPaused() method returns the current operating state of the Readable. This is used primarily by the mechanism that underlies the readable.pipe() method. In most typical cases, there will be no reason to use this method directly.

      const readable = new stream.Readable();
      
      readable.isPaused(); // === false
      readable.pause();
      readable.isPaused(); // === true
      readable.resume();
      readable.isPaused(); // === false
      
    • iterator(
      options?: { destroyOnReturn: boolean }
      ): AsyncIterator<any>;

      The iterator created by this method gives users the option to cancel the destruction of the stream if the for await...of loop is exited by return, break, or throw, or if the iterator should destroy the stream if the stream emitted an error during iteration.
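
      A short sketch (assuming a throwaway stream from Readable.from):

      import { Readable } from 'node:stream';
      
      const readable = Readable.from([1, 2, 3]);
      for await (const chunk of readable.iterator({ destroyOnReturn: false })) {
        console.log(chunk); // 1
        break; // with destroyOnReturn: false, breaking does not destroy the stream
      }
      console.log(readable.destroyed); // false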

    • listenerCount(eventName: string | symbol, listener?: Function): number;

      Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.

      @param eventName

      The name of the event being listened for

      @param listener

      The event handler function
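
      For example:

      import { EventEmitter } from 'node:events';
      
      const emitter = new EventEmitter();
      const handler = () => {};
      emitter.on('event', handler);
      emitter.on('event', handler);
      console.log(emitter.listenerCount('event')); // 2
      console.log(emitter.listenerCount('event', handler)); // 2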

    • listeners(eventName: string | symbol): Function[];

      Returns a copy of the array of listeners for the event named eventName.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      console.log(util.inspect(server.listeners('connection')));
      // Prints: [ [Function] ]
      
    • map(
      fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => any,
      options?: ArrayOptions
      ): Readable;

      This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be awaited before being passed to the result stream.

      @param fn

      a function to map over every chunk in the stream. Async or not.

      @returns

      a stream mapped with the function fn.
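
      For example:

      import { Readable } from 'node:stream';
      
      const doubled = await Readable.from([1, 2, 3]).map((n) => n * 2).toArray();
      console.log(doubled); // [2, 4, 6]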

    • off<K>(
      eventName: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Alias for emitter.removeListener().

    • on(event: 'close', listener: () => void): this;
      on(event: 'data', listener: (chunk: any) => void): this;
      on(event: 'drain', listener: () => void): this;
      on(event: 'end', listener: () => void): this;
      on(event: 'error', listener: (err: Error) => void): this;
      on(event: 'finish', listener: () => void): this;
      on(event: 'pause', listener: () => void): this;
      on(event: 'pipe', listener: (src: Readable) => void): this;
      on(event: 'readable', listener: () => void): this;
      on(event: 'resume', listener: () => void): this;
      on(event: 'unpipe', listener: (src: Readable) => void): this;
      on(event: string | symbol, listener: (...args: any[]) => void): this;

      Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.on('foo', () => console.log('a'));
      myEE.prependListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

    • once(event: 'close', listener: () => void): this;
      once(event: 'data', listener: (chunk: any) => void): this;
      once(event: 'drain', listener: () => void): this;
      once(event: 'end', listener: () => void): this;
      once(event: 'error', listener: (err: Error) => void): this;
      once(event: 'finish', listener: () => void): this;
      once(event: 'pause', listener: () => void): this;
      once(event: 'pipe', listener: (src: Readable) => void): this;
      once(event: 'readable', listener: () => void): this;
      once(event: 'resume', listener: () => void): this;
      once(event: 'unpipe', listener: (src: Readable) => void): this;
      once(event: string | symbol, listener: (...args: any[]) => void): this;

      Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

      server.once('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.once('foo', () => console.log('a'));
      myEE.prependOnceListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

    • pause(): this;

      The readable.pause() method will cause a stream in flowing mode to stop emitting 'data' events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.

      const readable = getReadableStreamSomehow();
      readable.on('data', (chunk) => {
        console.log(`Received ${chunk.length} bytes of data.`);
        readable.pause();
        console.log('There will be no additional data for 1 second.');
        setTimeout(() => {
          console.log('Now data will start flowing again.');
          readable.resume();
        }, 1000);
      });
      

      The readable.pause() method has no effect if there is a 'readable' event listener.

    • pipe<T extends WritableStream>(
      destination: T,
      options?: { end: boolean }
      ): T;
    • prependListener(event: 'close', listener: () => void): this;
      prependListener(event: 'data', listener: (chunk: any) => void): this;
      prependListener(event: 'drain', listener: () => void): this;
      prependListener(event: 'end', listener: () => void): this;
      prependListener(event: 'error', listener: (err: Error) => void): this;
      prependListener(event: 'finish', listener: () => void): this;
      prependListener(event: 'pause', listener: () => void): this;
      prependListener(event: 'pipe', listener: (src: Readable) => void): this;
      prependListener(event: 'readable', listener: () => void): this;
      prependListener(event: 'resume', listener: () => void): this;
      prependListener(event: 'unpipe', listener: (src: Readable) => void): this;
      prependListener(event: string | symbol, listener: (...args: any[]) => void): this;

      Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.prependListener('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

    • prependOnceListener(event: 'close', listener: () => void): this;
      prependOnceListener(event: 'data', listener: (chunk: any) => void): this;
      prependOnceListener(event: 'drain', listener: () => void): this;
      prependOnceListener(event: 'end', listener: () => void): this;
      prependOnceListener(event: 'error', listener: (err: Error) => void): this;
      prependOnceListener(event: 'finish', listener: () => void): this;
      prependOnceListener(event: 'pause', listener: () => void): this;
      prependOnceListener(event: 'pipe', listener: (src: Readable) => void): this;
      prependOnceListener(event: 'readable', listener: () => void): this;
      prependOnceListener(event: 'resume', listener: () => void): this;
      prependOnceListener(event: 'unpipe', listener: (src: Readable) => void): this;
      prependOnceListener(event: string | symbol, listener: (...args: any[]) => void): this;

      Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

      server.prependOnceListener('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

    • push(chunk: any, encoding?: BufferEncoding): boolean;
    • rawListeners(eventName: string | symbol): Function[];

      Returns a copy of the array of listeners for the event named eventName, including any wrappers (such as those created by .once()).

      import { EventEmitter } from 'node:events';
      const emitter = new EventEmitter();
      emitter.once('log', () => console.log('log once'));
      
      // Returns a new Array with a function `onceWrapper` which has a property
      // `listener` which contains the original listener bound above
      const listeners = emitter.rawListeners('log');
      const logFnWrapper = listeners[0];
      
      // Logs "log once" to the console and does not unbind the `once` event
      logFnWrapper.listener();
      
      // Logs "log once" to the console and removes the listener
      logFnWrapper();
      
      emitter.on('log', () => console.log('log persistently'));
      // Will return a new Array with a single function bound by `.on()` above
      const newListeners = emitter.rawListeners('log');
      
      // Logs "log persistently" twice
      newListeners[0]();
      emitter.emit('log');
      
    • read(size?: number): any;

      The readable.read() method reads data out of the internal buffer and returns it. If no data is available to be read, null is returned. By default, the data is returned as a Buffer object unless an encoding has been specified using the readable.setEncoding() method or the stream is operating in object mode.

      The optional size argument specifies a specific number of bytes to read. If size bytes are not available to be read, null will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.

      If the size argument is not specified, all of the data contained in the internal buffer will be returned.

      The size argument must be less than or equal to 1 GiB.

      The readable.read() method should only be called on Readable streams operating in paused mode. In flowing mode, readable.read() is called automatically until the internal buffer is fully drained.

      const readable = getReadableStreamSomehow();
      
      // 'readable' may be triggered multiple times as data is buffered in
      readable.on('readable', () => {
        let chunk;
        console.log('Stream is readable (new data received in buffer)');
        // Use a loop to make sure we read all currently available data
        while (null !== (chunk = readable.read())) {
          console.log(`Read ${chunk.length} bytes of data...`);
        }
      });
      
      // 'end' will be triggered once when there is no more data available
      readable.on('end', () => {
        console.log('Reached end of stream.');
      });
      

      Each call to readable.read() returns a chunk of data, or null. The chunks are not concatenated. A while loop is necessary to consume all data currently in the buffer. When reading a large file .read() may return null, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new 'readable' event will be emitted when there is more data in the buffer. Finally the 'end' event will be emitted when there is no more data to come.

      Therefore to read a file's whole contents from a readable, it is necessary to collect chunks across multiple 'readable' events:

      const chunks = [];
      
      readable.on('readable', () => {
        let chunk;
        while (null !== (chunk = readable.read())) {
          chunks.push(chunk);
        }
      });
      
      readable.on('end', () => {
        const content = chunks.join('');
      });
      

      A Readable stream in object mode will always return a single item from a call to readable.read(size), regardless of the value of the size argument.

      If the readable.read() method returns a chunk of data, a 'data' event will also be emitted.

      Calling read after the 'end' event has been emitted will return null. No runtime error will be raised.

      @param size

      Optional argument to specify how much data to read.

    • reduce<T = any>(
      fn: (previous: any, data: any, options?: Pick<ArrayOptions, 'signal'>) => T,
      initial?: undefined,
      options?: Pick<ArrayOptions, 'signal'>
      ): Promise<T>;
      reduce<T = any>(
      fn: (previous: T, data: any, options?: Pick<ArrayOptions, 'signal'>) => T,
      initial: T,
      options?: Pick<ArrayOptions, 'signal'>
      ): Promise<T>;

      This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.

      If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.

      The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function and pass it to the readable.map method first.

      @param fn

      a reducer function to call over every chunk in the stream. Async or not.

      @param initial

      the initial value to use in the reduction.

      @returns

      a promise for the final value of the reduction.
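
      A minimal sketch, summing the chunks of a small numeric stream:

      import { Readable } from 'node:stream';
      
      const sum = await Readable.from([1, 2, 3, 4]).reduce((acc, n) => acc + n, 0);
      console.log(sum); // 10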

    • removeAllListeners(eventName?: string | symbol): this;

      Removes all listeners, or those of the specified eventName.

      It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).

      Returns a reference to the EventEmitter, so that calls can be chained.

    • removeListener(event: 'close', listener: () => void): this;
      removeListener(event: 'data', listener: (chunk: any) => void): this;
      removeListener(event: 'drain', listener: () => void): this;
      removeListener(event: 'end', listener: () => void): this;
      removeListener(event: 'error', listener: (err: Error) => void): this;
      removeListener(event: 'finish', listener: () => void): this;
      removeListener(event: 'pause', listener: () => void): this;
      removeListener(event: 'pipe', listener: (src: Readable) => void): this;
      removeListener(event: 'readable', listener: () => void): this;
      removeListener(event: 'resume', listener: () => void): this;
      removeListener(event: 'unpipe', listener: (src: Readable) => void): this;
      removeListener(event: string | symbol, listener: (...args: any[]) => void): this;

      Removes the specified listener from the listener array for the event named eventName.

      const callback = (stream) => {
        console.log('someone connected!');
      };
      server.on('connection', callback);
      // ...
      server.removeListener('connection', callback);
      

      removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance.

      Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from an emit() in progress. Subsequent events behave as expected.

      import { EventEmitter } from 'node:events';
      class MyEmitter extends EventEmitter {}
      const myEmitter = new MyEmitter();
      
      const callbackA = () => {
        console.log('A');
        myEmitter.removeListener('event', callbackB);
      };
      
      const callbackB = () => {
        console.log('B');
      };
      
      myEmitter.on('event', callbackA);
      
      myEmitter.on('event', callbackB);
      
      // callbackA removes listener callbackB but it will still be called.
      // Internal listener array at time of emit [callbackA, callbackB]
      myEmitter.emit('event');
      // Prints:
      //   A
      //   B
      
      // callbackB is now removed.
      // Internal listener array [callbackA]
      myEmitter.emit('event');
      // Prints:
      //   A
      

      Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.

      When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed:

      import { EventEmitter } from 'node:events';
      const ee = new EventEmitter();
      
      function pong() {
        console.log('pong');
      }
      
      ee.on('ping', pong);
      ee.once('ping', pong);
      ee.removeListener('ping', pong);
      
      ee.emit('ping');
      ee.emit('ping');
      

      Returns a reference to the EventEmitter, so that calls can be chained.

    • resume(): this;

      The readable.resume() method causes an explicitly paused Readable stream to resume emitting 'data' events, switching the stream into flowing mode.

      The readable.resume() method can be used to fully consume the data from a stream without actually processing any of that data:

      getReadableStreamSomehow()
        .resume()
        .on('end', () => {
          console.log('Reached the end, but did not read anything.');
        });
      

      The readable.resume() method has no effect if there is a 'readable' event listener.

    • setAutoPadding(autoPadding?: boolean): this;

      When using block encryption algorithms, the Cipher class will automatically add padding to the input data to the appropriate block size. To disable the default padding call cipher.setAutoPadding(false).

      When autoPadding is false, the length of the entire input data must be a multiple of the cipher's block size or cipher.final() will throw an error. Disabling automatic padding is useful for non-standard padding, for instance using 0x0 instead of PKCS padding.

      The cipher.setAutoPadding() method must be called before cipher.final().

      @returns

      for method chaining.
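
      For illustration, a minimal sketch with padding disabled (throwaway key and IV from randomBytes; the input is exactly one 16-byte AES block):

      const { createCipheriv, randomBytes } = await import('node:crypto');
      
      const key = randomBytes(24); // aes-192-cbc uses a 24-byte key
      const iv = randomBytes(16);
      
      const cipher = createCipheriv('aes-192-cbc', key, iv);
      cipher.setAutoPadding(false);
      
      // With padding disabled, the total input length must be a multiple of the
      // 16-byte block size, or cipher.final() throws.
      const block = Buffer.alloc(16, 'a');
      const encrypted = Buffer.concat([cipher.update(block), cipher.final()]);
      console.log(encrypted.length); // prints 16: no extra padding block appended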

    • setDefaultEncoding(encoding: BufferEncoding): this;

      The writable.setDefaultEncoding() method sets the default encoding for a Writable stream.

      @param encoding

      The new default encoding

    • setEncoding(encoding: BufferEncoding): this;

      The readable.setEncoding() method sets the character encoding for data read from the Readable stream.

      By default, no encoding is assigned and stream data will be returned as Buffer objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than as Buffer objects. For instance, calling readable.setEncoding('utf8') will cause the output data to be interpreted as UTF-8 data, and passed as strings. Calling readable.setEncoding('hex') will cause the data to be encoded in hexadecimal string format.

      The Readable stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream as Buffer objects.

      const readable = getReadableStreamSomehow();
      readable.setEncoding('utf8');
      readable.on('data', (chunk) => {
        assert.equal(typeof chunk, 'string');
        console.log('Got %d characters of string data:', chunk.length);
      });
      
      @param encoding

      The encoding to use.

    • setMaxListeners(n: number): this;

      By default EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps finding memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.

      Returns a reference to the EventEmitter, so that calls can be chained.

    • some(
      fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
      options?: ArrayOptions
      ): Promise<boolean>;

      This method is similar to Array.prototype.some and calls fn on each chunk in the stream until the awaited return value is true (or any truthy value). Once an fn call on a chunk's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false.

      @param fn

      a function to call on each chunk of the stream. Async or not.

      @returns

      a promise evaluating to true if fn returned a truthy value for at least one of the chunks.
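
      For example:

      import { Readable } from 'node:stream';
      
      const hasEven = await Readable.from([1, 3, 4]).some((n) => n % 2 === 0);
      console.log(hasEven); // true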

    • take(limit: number, options?: Pick<ArrayOptions, 'signal'>): Readable;

      This method returns a new stream with the first limit chunks.

      @param limit

      the number of chunks to take from the readable.

      @returns

      a stream with limit chunks taken.
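
      A short sketch:

      import { Readable } from 'node:stream';
      
      const firstTwo = await Readable.from([1, 2, 3, 4]).take(2).toArray();
      console.log(firstTwo); // [1, 2]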

    • toArray(options?: Pick<ArrayOptions, 'signal'>): Promise<any[]>;

      This method allows easily obtaining the contents of a stream.

      As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.

      @returns

      a promise containing an array with the contents of the stream.
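
      For example:

      import { Readable } from 'node:stream';
      
      const chunks = await Readable.from(['a', 'b', 'c']).toArray();
      console.log(chunks); // [ 'a', 'b', 'c' ]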

    • uncork(): void;

      The writable.uncork() method flushes all data buffered since cork was called.

      When using writable.cork() and writable.uncork() to manage the buffering of writes to a stream, defer calls to writable.uncork() using process.nextTick(). Doing so allows batching of all writable.write() calls that occur within a given Node.js event loop phase.

      stream.cork();
      stream.write('some ');
      stream.write('data ');
      process.nextTick(() => stream.uncork());
      

      If the writable.cork() method is called multiple times on a stream, the same number of calls to writable.uncork() must be called to flush the buffered data.

      stream.cork();
      stream.write('some ');
      stream.cork();
      stream.write('data ');
      process.nextTick(() => {
        stream.uncork();
        // The data will not be flushed until uncork() is called a second time.
        stream.uncork();
      });
      

      See also: writable.cork().

    • unpipe(destination?: WritableStream): this;

      The readable.unpipe() method detaches a Writable stream previously attached using the pipe method.

      If the destination is not specified, then all pipes are detached.

      If the destination is specified, but no pipe is set up for it, then the method does nothing.

      import fs from 'node:fs';
      const readable = getReadableStreamSomehow();
      const writable = fs.createWriteStream('file.txt');
      // All the data from readable goes into 'file.txt',
      // but only for the first second.
      readable.pipe(writable);
      setTimeout(() => {
        console.log('Stop writing to file.txt.');
        readable.unpipe(writable);
        console.log('Manually close the file stream.');
        writable.end();
      }, 1000);
      
      @param destination

      Optional specific stream to unpipe

    • unshift(chunk: any, encoding?: BufferEncoding): void;

      Passing chunk as null signals the end of the stream (EOF) and behaves the same as readable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.

      The readable.unshift() method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.

      The stream.unshift(chunk) method cannot be called after the 'end' event has been emitted or a runtime error will be thrown.

      Developers using stream.unshift() often should consider switching to use of a Transform stream instead. See the API for stream implementers section for more information.

      // Pull off a header delimited by \n\n.
      // Use unshift() if we get too much.
      // Call the callback with (error, header, stream).
      import { StringDecoder } from 'node:string_decoder';
      function parseHeader(stream, callback) {
        stream.on('error', callback);
        stream.on('readable', onReadable);
        const decoder = new StringDecoder('utf8');
        let header = '';
        function onReadable() {
          let chunk;
          while (null !== (chunk = stream.read())) {
            const str = decoder.write(chunk);
            if (str.includes('\n\n')) {
              // Found the header boundary.
              const split = str.split(/\n\n/);
              header += split.shift();
              const remaining = split.join('\n\n');
              const buf = Buffer.from(remaining, 'utf8');
              stream.removeListener('error', callback);
              // Remove the 'readable' listener before unshifting.
              stream.removeListener('readable', onReadable);
              if (buf.length)
                stream.unshift(buf);
              // Now the body of the message can be read from the stream.
              callback(null, header, stream);
              return;
            }
            // Still reading the header.
            header += str;
          }
        }
      }
      

      Unlike push, stream.unshift(chunk) will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results if readable.unshift() is called during a read (i.e. from within a _read implementation on a custom stream). Following the call to readable.unshift() with an immediate push will reset the reading state appropriately, however it is best to simply avoid calling readable.unshift() while in the process of performing a read.

      @param chunk

      Chunk of data to unshift onto the read queue. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray}, {DataView} or null. For object mode streams, chunk may be any JavaScript value.

      @param encoding

      Encoding of string chunks. Must be a valid Buffer encoding, such as 'utf8' or 'ascii'.

    • ): Buffer;

      Updates the cipher with data. If the inputEncoding argument is given, the dataargument is a string using the specified encoding. If the inputEncodingargument is not given, data must be a Buffer, TypedArray, or DataView. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.

      The outputEncoding specifies the output format of the enciphered data. If the outputEncodingis specified, a string using the specified encoding is returned. If nooutputEncoding is provided, a Buffer is returned.

      The cipher.update() method can be called multiple times with new data until cipher.final() is called. Calling cipher.update() after cipher.final() will result in an error being thrown.

      data: string,
      inputEncoding: Encoding
      ): Buffer;

      Updates the cipher with data. If the inputEncoding argument is given, the dataargument is a string using the specified encoding. If the inputEncodingargument is not given, data must be a Buffer, TypedArray, or DataView. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.

      The outputEncoding specifies the output format of the enciphered data. If the outputEncodingis specified, a string using the specified encoding is returned. If nooutputEncoding is provided, a Buffer is returned.

      The cipher.update() method can be called multiple times with new data until cipher.final() is called. Calling cipher.update() after cipher.final() will result in an error being thrown.

      @param inputEncoding

      The encoding of the data.

      data: ArrayBufferView,
      inputEncoding: undefined,
      outputEncoding: Encoding
      ): string;

      Updates the cipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer, TypedArray, or DataView. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.

      The outputEncoding specifies the output format of the enciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.

      The cipher.update() method can be called multiple times with new data until cipher.final() is called. Calling cipher.update() after cipher.final() will result in an error being thrown.

      @param inputEncoding

      The encoding of the data.

      @param outputEncoding

      The encoding of the return value.

      data: string,
      inputEncoding: undefined | Encoding,
      outputEncoding: Encoding
      ): string;

      Updates the cipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer, TypedArray, or DataView. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.

      The outputEncoding specifies the output format of the enciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.

      The cipher.update() method can be called multiple times with new data until cipher.final() is called. Calling cipher.update() after cipher.final() will result in an error being thrown.

      @param inputEncoding

      The encoding of the data.

      @param outputEncoding

      The encoding of the return value.
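
      For illustration, a minimal sketch combining these overloads (the key and IV setup mirrors the earlier aes-192-cbc examples):

      import { scryptSync, randomBytes, createCipheriv } from 'node:crypto';

      const key = scryptSync('Password used to generate key', 'salt', 24);
      const iv = randomBytes(16);
      const cipher = createCipheriv('aes-192-cbc', key, iv);

      // String input with an inputEncoding, hex string output.
      let encrypted = cipher.update('some clear text data', 'utf8', 'hex');
      // update() may be called again with more data before final().
      encrypted += cipher.update(' and some more', 'utf8', 'hex');
      encrypted += cipher.final('hex');
      console.log(encrypted); // Hex-encoded ciphertext.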

    • stream: ReadableStream
      ): this;

      Prior to Node.js 0.10, streams did not implement the entire node:stream module API as it is currently defined. (See Compatibility for more information.)

      When using an older Node.js library that emits 'data' events and has a pause method that is advisory only, the readable.wrap() method can be used to create a Readable stream that uses the old stream as its data source.

      It will rarely be necessary to use readable.wrap() but the method has been provided as a convenience for interacting with older Node.js applications and libraries.

      import { OldReader } from './old-api-module.js';
      import { Readable } from 'node:stream';
      const oreader = new OldReader();
      const myReader = new Readable().wrap(oreader);
      
      myReader.on('readable', () => {
        myReader.read(); // etc.
      });
      
      @param stream

      An "old style" readable stream

    • chunk: any,
      callback?: (error: undefined | null | Error) => void
      ): boolean;

      The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

      The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

      While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing to a socket that is not draining may lead to a remotely exploitable vulnerability.

      Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

      If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

      function write(data, cb) {
        if (!stream.write(data)) {
          stream.once('drain', cb);
        } else {
          process.nextTick(cb);
        }
      }
      
      // Wait for cb to be called before doing any other write.
      write('hello', () => {
        console.log('Write completed, do more writes now.');
      });
      

      A Writable stream in object mode will always ignore the encoding argument.

      @param chunk

      Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

      @param callback

      Callback for when this chunk of data is flushed.

      @returns

      false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.

      chunk: any,
      encoding: BufferEncoding,
      callback?: (error: undefined | null | Error) => void
      ): boolean;

      The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

      The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

      While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing to a socket that is not draining may lead to a remotely exploitable vulnerability.

      Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

      If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

      function write(data, cb) {
        if (!stream.write(data)) {
          stream.once('drain', cb);
        } else {
          process.nextTick(cb);
        }
      }
      
      // Wait for cb to be called before doing any other write.
      write('hello', () => {
        console.log('Write completed, do more writes now.');
      });
      

      A Writable stream in object mode will always ignore the encoding argument.

      @param chunk

      Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

      @param encoding

      The encoding, if chunk is a string.

      @param callback

      Callback for when this chunk of data is flushed.

      @returns

      false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.

    • signal: AbortSignal,
      resource: (event: Event) => void
      ): Disposable;

      Listens once to the abort event on the provided signal.

      Listening to the abort event on abort signals is unsafe and may lead to resource leaks since another third party with the signal can call e.stopImmediatePropagation(). Unfortunately Node.js cannot change this since it would violate the web standard. Additionally, the original API makes it easy to forget to remove listeners.

      This API allows safely using AbortSignals in Node.js APIs by solving these two issues: it listens to the event such that stopImmediatePropagation does not prevent the listener from running.

      Returns a disposable so that it may be unsubscribed from more easily.

      import { addAbortListener } from 'node:events';
      
      function example(signal) {
        let disposable;
        try {
          signal.addEventListener('abort', (e) => e.stopImmediatePropagation());
          disposable = addAbortListener(signal, (e) => {
            // Do something when signal is aborted.
          });
        } finally {
          disposable?.[Symbol.dispose]();
        }
      }
      
      @returns

      Disposable that removes the abort listener.

    • static from(
      src: string | Object | Stream | ArrayBuffer | Blob | Iterable<any, any, any> | AsyncIterable<any, any, any> | AsyncGeneratorFunction | Promise<any>
      ): Duplex;

      A utility method for creating duplex streams. A short sketch of the AsyncGeneratorFunction case follows the list below.

      • Stream converts writable stream into writable Duplex and readable stream to Duplex.
      • Blob converts into readable Duplex.
      • string converts into readable Duplex.
      • ArrayBuffer converts into readable Duplex.
      • AsyncIterable converts into a readable Duplex. Cannot yield null.
      • AsyncGeneratorFunction converts into a readable/writable transform Duplex. Must take a source AsyncIterable as first parameter. Cannot yield null.
      • AsyncFunction converts into a writable Duplex. Must return either null or undefined
      • Object ({ writable, readable }) converts readable and writable into Stream and then combines them into Duplex where the Duplex will write to the writable and read from the readable.
      • Promise converts into readable Duplex. Value null is ignored.
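
      As a minimal sketch of the AsyncGeneratorFunction case, an async generator taking the source AsyncIterable as its first parameter becomes a readable/writable transform Duplex:

      import { Duplex } from 'node:stream';

      const upperCaser = Duplex.from(async function* (source) {
        for await (const chunk of source) {
          yield String(chunk).toUpperCase();
        }
      });

      upperCaser.on('data', (chunk) => console.log(chunk.toString())); // HELLO
      upperCaser.end('hello');
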
    • static fromWeb(
      duplexStream: { readable: ReadableStream; writable: WritableStream },
      options?: Pick<DuplexOptions<Duplex>, 'signal' | 'allowHalfOpen' | 'decodeStrings' | 'encoding' | 'highWaterMark' | 'objectMode'>
      ): Duplex;

      A utility method for creating a Duplex from a web ReadableStream and WritableStream.
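
      For example (a minimal sketch, assuming the web TransformStream global available in modern Node.js and Bun, whose readable/writable pair can be wrapped directly):

      import { Duplex } from 'node:stream';

      const { readable, writable } = new TransformStream();
      const duplex = Duplex.fromWeb({ readable, writable }, { encoding: 'utf8' });

      duplex.on('data', (chunk) => console.log(chunk)); // 'hello'
      // Data written to the Duplex goes into the web writable side and
      // comes back out of the web readable side.
      duplex.write('hello');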

    • emitter: EventEmitter<DefaultEventMap> | EventTarget,
      name: string | symbol
      ): Function[];

      Returns a copy of the array of listeners for the event named eventName.

      For EventEmitters this behaves exactly the same as calling .listeners on the emitter.

      For EventTargets this is the only way to get the event listeners for the event target. This is useful for debugging and diagnostic purposes.

      import { getEventListeners, EventEmitter } from 'node:events';
      
      {
        const ee = new EventEmitter();
        const listener = () => console.log('Events are fun');
        ee.on('foo', listener);
        console.log(getEventListeners(ee, 'foo')); // [ [Function: listener] ]
      }
      {
        const et = new EventTarget();
        const listener = () => console.log('Events are fun');
        et.addEventListener('foo', listener);
        console.log(getEventListeners(et, 'foo')); // [ [Function: listener] ]
      }
      
    • emitter: EventEmitter<DefaultEventMap> | EventTarget
      ): number;

      Returns the currently set max amount of listeners.

      For EventEmitters this behaves exactly the same as calling .getMaxListeners on the emitter.

      For EventTargets this is the only way to get the max event listeners for the event target. If the number of event handlers on a single EventTarget exceeds the max set, the EventTarget will print a warning.

      import { getMaxListeners, setMaxListeners, EventEmitter } from 'node:events';
      
      {
        const ee = new EventEmitter();
        console.log(getMaxListeners(ee)); // 10
        setMaxListeners(11, ee);
        console.log(getMaxListeners(ee)); // 11
      }
      {
        const et = new EventTarget();
        console.log(getMaxListeners(et)); // 10
        setMaxListeners(11, et);
        console.log(getMaxListeners(et)); // 11
      }
      
    • static on(
      emitter: EventEmitter,
      eventName: string | symbol,
      options?: StaticEventEmitterIteratorOptions
      ): AsyncIterator<any[]>;
      import { on, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ee = new EventEmitter();
      
      // Emit later on
      process.nextTick(() => {
        ee.emit('foo', 'bar');
        ee.emit('foo', 42);
      });
      
      for await (const event of on(ee, 'foo')) {
        // The execution of this inner block is synchronous and it
        // processes one event at a time (even with await). Do not use
        // if concurrent execution is required.
        console.log(event); // prints ['bar'] [42]
      }
      // Unreachable here
      

      Returns an AsyncIterator that iterates eventName events. It will throw if the EventEmitter emits 'error'. It removes all listeners when exiting the loop. The value returned by each iteration is an array composed of the emitted event arguments.

      An AbortSignal can be used to cancel waiting on events:

      import { on, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ac = new AbortController();
      
      (async () => {
        const ee = new EventEmitter();
      
        // Emit later on
        process.nextTick(() => {
          ee.emit('foo', 'bar');
          ee.emit('foo', 42);
        });
      
        for await (const event of on(ee, 'foo', { signal: ac.signal })) {
          // The execution of this inner block is synchronous and it
          // processes one event at a time (even with await). Do not use
          // if concurrent execution is required.
          console.log(event); // prints ['bar'] [42]
        }
        // Unreachable here
      })();
      
      process.nextTick(() => ac.abort());
      

      Use the close option to specify an array of event names that will end the iteration:

      import { on, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ee = new EventEmitter();
      
      // Emit later on
      process.nextTick(() => {
        ee.emit('foo', 'bar');
        ee.emit('foo', 42);
        ee.emit('close');
      });
      
      for await (const event of on(ee, 'foo', { close: ['close'] })) {
        console.log(event); // prints ['bar'] [42]
      }
      // the loop will exit after 'close' is emitted
      console.log('done'); // prints 'done'
      
      @returns

      An AsyncIterator that iterates eventName events emitted by the emitter

      static on(
      emitter: EventTarget,
      eventName: string,
      options?: StaticEventEmitterIteratorOptions
      ): AsyncIterator<any[]>;
      import { on, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ee = new EventEmitter();
      
      // Emit later on
      process.nextTick(() => {
        ee.emit('foo', 'bar');
        ee.emit('foo', 42);
      });
      
      for await (const event of on(ee, 'foo')) {
        // The execution of this inner block is synchronous and it
        // processes one event at a time (even with await). Do not use
        // if concurrent execution is required.
        console.log(event); // prints ['bar'] [42]
      }
      // Unreachable here
      

      Returns an AsyncIterator that iterates eventName events. It will throw if the EventEmitter emits 'error'. It removes all listeners when exiting the loop. The value returned by each iteration is an array composed of the emitted event arguments.

      An AbortSignal can be used to cancel waiting on events:

      import { on, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ac = new AbortController();
      
      (async () => {
        const ee = new EventEmitter();
      
        // Emit later on
        process.nextTick(() => {
          ee.emit('foo', 'bar');
          ee.emit('foo', 42);
        });
      
        for await (const event of on(ee, 'foo', { signal: ac.signal })) {
          // The execution of this inner block is synchronous and it
          // processes one event at a time (even with await). Do not use
          // if concurrent execution is required.
          console.log(event); // prints ['bar'] [42]
        }
        // Unreachable here
      })();
      
      process.nextTick(() => ac.abort());
      

      Use the close option to specify an array of event names that will end the iteration:

      import { on, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ee = new EventEmitter();
      
      // Emit later on
      process.nextTick(() => {
        ee.emit('foo', 'bar');
        ee.emit('foo', 42);
        ee.emit('close');
      });
      
      for await (const event of on(ee, 'foo', { close: ['close'] })) {
        console.log(event); // prints ['bar'] [42]
      }
      // the loop will exit after 'close' is emitted
      console.log('done'); // prints 'done'
      
      @returns

      An AsyncIterator that iterates eventName events emitted by the emitter

    • static once(
      emitter: EventEmitter,
      eventName: string | symbol,
      options?: StaticEventEmitterOptions
      ): Promise<any[]>;

      Creates a Promise that is fulfilled when the EventEmitter emits the given event or that is rejected if the EventEmitter emits 'error' while waiting. The Promise will resolve with an array of all the arguments emitted to the given event.

      This method is intentionally generic and works with the web platform EventTarget interface, which has no special 'error' event semantics and does not listen to the 'error' event.

      import { once, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ee = new EventEmitter();
      
      process.nextTick(() => {
        ee.emit('myevent', 42);
      });
      
      const [value] = await once(ee, 'myevent');
      console.log(value);
      
      const err = new Error('kaboom');
      process.nextTick(() => {
        ee.emit('error', err);
      });
      
      try {
        await once(ee, 'myevent');
      } catch (err) {
        console.error('error happened', err);
      }
      

      The special handling of the 'error' event is only used when events.once() is used to wait for another event. If events.once() is used to wait for the 'error' event itself, then it is treated as any other kind of event without special handling:

      import { EventEmitter, once } from 'node:events';
      
      const ee = new EventEmitter();
      
      once(ee, 'error')
        .then(([err]) => console.log('ok', err.message))
        .catch((err) => console.error('error', err.message));
      
      ee.emit('error', new Error('boom'));
      
      // Prints: ok boom
      

      An AbortSignal can be used to cancel waiting for the event:

      import { EventEmitter, once } from 'node:events';
      
      const ee = new EventEmitter();
      const ac = new AbortController();
      
      async function foo(emitter, event, signal) {
        try {
          await once(emitter, event, { signal });
          console.log('event emitted!');
        } catch (error) {
          if (error.name === 'AbortError') {
            console.error('Waiting for the event was canceled!');
          } else {
            console.error('There was an error', error.message);
          }
        }
      }
      
      foo(ee, 'foo', ac.signal);
      ac.abort(); // Abort waiting for the event
      ee.emit('foo'); // Prints: Waiting for the event was canceled!
      
      static once(
      emitter: EventTarget,
      eventName: string,
      options?: StaticEventEmitterOptions
      ): Promise<any[]>;

      Creates a Promise that is fulfilled when the EventEmitter emits the given event or that is rejected if the EventEmitter emits 'error' while waiting. The Promise will resolve with an array of all the arguments emitted to the given event.

      This method is intentionally generic and works with the web platform EventTarget interface, which has no special 'error' event semantics and does not listen to the 'error' event.

      import { once, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ee = new EventEmitter();
      
      process.nextTick(() => {
        ee.emit('myevent', 42);
      });
      
      const [value] = await once(ee, 'myevent');
      console.log(value);
      
      const err = new Error('kaboom');
      process.nextTick(() => {
        ee.emit('error', err);
      });
      
      try {
        await once(ee, 'myevent');
      } catch (err) {
        console.error('error happened', err);
      }
      

      The special handling of the 'error' event is only used when events.once() is used to wait for another event. If events.once() is used to wait for the 'error' event itself, then it is treated as any other kind of event without special handling:

      import { EventEmitter, once } from 'node:events';
      
      const ee = new EventEmitter();
      
      once(ee, 'error')
        .then(([err]) => console.log('ok', err.message))
        .catch((err) => console.error('error', err.message));
      
      ee.emit('error', new Error('boom'));
      
      // Prints: ok boom
      

      An AbortSignal can be used to cancel waiting for the event:

      import { EventEmitter, once } from 'node:events';
      
      const ee = new EventEmitter();
      const ac = new AbortController();
      
      async function foo(emitter, event, signal) {
        try {
          await once(emitter, event, { signal });
          console.log('event emitted!');
        } catch (error) {
          if (error.name === 'AbortError') {
            console.error('Waiting for the event was canceled!');
          } else {
            console.error('There was an error', error.message);
          }
        }
      }
      
      foo(ee, 'foo', ac.signal);
      ac.abort(); // Abort waiting for the event
      ee.emit('foo'); // Prints: Waiting for the event was canceled!
      
    • n?: number,
      ...eventTargets: (EventEmitter<DefaultEventMap> | EventTarget)[]
      ): void;
      import { setMaxListeners, EventEmitter } from 'node:events';
      
      const target = new EventTarget();
      const emitter = new EventEmitter();
      
      setMaxListeners(5, target, emitter);
      
      @param n

      A non-negative number. The maximum number of listeners per EventTarget event.

      @param eventTargets

      Zero or more {EventTarget} or {EventEmitter} instances. If none are specified, n is set as the default max for all newly created {EventTarget} and {EventEmitter} objects.

    • static toWeb(
      streamDuplex: Duplex
      ): { readable: ReadableStream; writable: WritableStream };

      A utility method for creating a web ReadableStream and WritableStream from a Duplex.
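
      A minimal sketch going the other way, using a toy Duplex that echoes writes back to its readable side (the echo behavior is illustrative only):

      import { Buffer } from 'node:buffer';
      import { Duplex } from 'node:stream';

      const duplex = new Duplex({
        read() {},
        write(chunk, encoding, callback) {
          this.push(chunk); // Echo the write to the readable side.
          callback();
        },
      });

      const { readable, writable } = Duplex.toWeb(duplex);

      const writer = writable.getWriter();
      await writer.write('hello');

      const { value } = await readable.getReader().read();
      console.log(Buffer.from(value).toString()); // 'hello'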

  • class Decipher

    Instances of the Decipher class are used to decrypt data. The class can be used in one of two ways:

    • As a stream that is both readable and writable, where plain encrypted data is written to produce unencrypted data on the readable side, or
    • Using the decipher.update() and decipher.final() methods to produce the unencrypted data.

    The createDecipheriv method is used to create Decipher instances. Decipher objects are not to be created directly using the new keyword.

    Example: Using Decipher objects as streams:

    import { Buffer } from 'node:buffer';
    const {
      scryptSync,
      createDecipheriv,
    } = await import('node:crypto');
    
    const algorithm = 'aes-192-cbc';
    const password = 'Password used to generate key';
    // Key length is dependent on the algorithm. In this case for aes192, it is
    // 24 bytes (192 bits).
    // Use the async `crypto.scrypt()` instead.
    const key = scryptSync(password, 'salt', 24);
    // The IV is usually passed along with the ciphertext.
    const iv = Buffer.alloc(16, 0); // Initialization vector.
    
    const decipher = createDecipheriv(algorithm, key, iv);
    
    let decrypted = '';
    decipher.on('readable', () => {
      let chunk;
      while (null !== (chunk = decipher.read())) {
        decrypted += chunk.toString('utf8');
      }
    });
    decipher.on('end', () => {
      console.log(decrypted);
      // Prints: some clear text data
    });
    
    // Encrypted with same algorithm, key and iv.
    const encrypted =
      'e5f79c5915c02171eec6b212d5520d44480993d7d622a7c4c2da32f6efda0ffa';
    decipher.write(encrypted, 'hex');
    decipher.end();
    

    Example: Using Decipher and piped streams:

    import {
      createReadStream,
      createWriteStream,
    } from 'node:fs';
    import { Buffer } from 'node:buffer';
    const {
      scryptSync,
      createDecipheriv,
    } = await import('node:crypto');
    
    const algorithm = 'aes-192-cbc';
    const password = 'Password used to generate key';
    // Use the async `crypto.scrypt()` instead.
    const key = scryptSync(password, 'salt', 24);
    // The IV is usually passed along with the ciphertext.
    const iv = Buffer.alloc(16, 0); // Initialization vector.
    
    const decipher = createDecipheriv(algorithm, key, iv);
    
    const input = createReadStream('test.enc');
    const output = createWriteStream('test.js');
    
    input.pipe(decipher).pipe(output);
    

    Example: Using the decipher.update() and decipher.final() methods:

    import { Buffer } from 'node:buffer';
    const {
      scryptSync,
      createDecipheriv,
    } = await import('node:crypto');
    
    const algorithm = 'aes-192-cbc';
    const password = 'Password used to generate key';
    // Use the async `crypto.scrypt()` instead.
    const key = scryptSync(password, 'salt', 24);
    // The IV is usually passed along with the ciphertext.
    const iv = Buffer.alloc(16, 0); // Initialization vector.
    
    const decipher = createDecipheriv(algorithm, key, iv);
    
    // Encrypted using same algorithm, key and iv.
    const encrypted =
      'e5f79c5915c02171eec6b212d5520d44480993d7d622a7c4c2da32f6efda0ffa';
    let decrypted = decipher.update(encrypted, 'hex', 'utf8');
    decrypted += decipher.final('utf8');
    console.log(decrypted);
    // Prints: some clear text data
    
    • allowHalfOpen: boolean

      If false then the stream will automatically end the writable side when the readable side ends. Set initially by the allowHalfOpen constructor option, which defaults to true.

      This can be changed manually to change the half-open behavior of an existing Duplex stream instance, but must be changed before the 'end' event is emitted.

    • readonly closed: boolean

      Is true after 'close' has been emitted.

    • destroyed: boolean

      Is true after readable.destroy() has been called.

    • readonly errored: null | Error

      Returns error if the stream has been destroyed with an error.

    • readable: boolean

      Is true if it is safe to call read, which means the stream has not been destroyed or emitted 'error' or 'end'.

    • readonly readableAborted: boolean

      Returns whether the stream was destroyed or errored before emitting 'end'.

    • readonly readableDidRead: boolean

      Returns whether 'data' has been emitted.

    • readonly readableEncoding: null | BufferEncoding

      Getter for the property encoding of a given Readable stream. The encoding property can be set using the setEncoding method.

    • readonly readableEnded: boolean

      Becomes true when 'end' event is emitted.

    • readonly readableFlowing: null | boolean

      This property reflects the current state of a Readable stream as described in the Three states section.

    • readonly readableHighWaterMark: number

      Returns the value of highWaterMark passed when creating this Readable.

    • readonly readableLength: number

      This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the highWaterMark.

    • readonly readableObjectMode: boolean

      Getter for the property objectMode of a given Readable stream.

    • readonly writable: boolean

      Is true if it is safe to call writable.write(), which means the stream has not been destroyed, errored, or ended.

    • readonly writableCorked: number

      Number of times writable.uncork() needs to be called in order to fully uncork the stream.

    • readonly writableEnded: boolean

      Is true after writable.end() has been called. This property does not indicate whether the data has been flushed, for this use writable.writableFinished instead.

    • readonly writableFinished: boolean

      Is set to true immediately before the 'finish' event is emitted.

    • readonly writableHighWaterMark: number

      Return the value of highWaterMark passed when creating this Writable.

    • readonly writableLength: number

      This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the highWaterMark.

    • readonly writableNeedDrain: boolean

      Is true if the stream's buffer has been full and the stream will emit 'drain'.

    • readonly writableObjectMode: boolean

      Getter for the property objectMode of a given Writable stream.

    • static captureRejections: boolean

      Value: boolean

      Change the default captureRejections option on all new EventEmitter objects.

    • readonly static captureRejectionSymbol: typeof captureRejectionSymbol

      Value: Symbol.for('nodejs.rejection')

      See how to write a custom rejection handler.

    • static defaultMaxListeners: number

      By default, a maximum of 10 listeners can be registered for any single event. This limit can be changed for individual EventEmitter instances using the emitter.setMaxListeners(n) method. To change the default for all EventEmitter instances, the events.defaultMaxListeners property can be used. If this value is not a positive number, a RangeError is thrown.

      Take caution when setting the events.defaultMaxListeners because the change affects all EventEmitter instances, including those created before the change is made. However, calling emitter.setMaxListeners(n) still has precedence over events.defaultMaxListeners.

      This is not a hard limit. The EventEmitter instance will allow more listeners to be added but will output a trace warning to stderr indicating that a "possible EventEmitter memory leak" has been detected. For any single EventEmitter, the emitter.getMaxListeners() and emitter.setMaxListeners() methods can be used to temporarily avoid this warning:

      import { EventEmitter } from 'node:events';
      const emitter = new EventEmitter();
      emitter.setMaxListeners(emitter.getMaxListeners() + 1);
      emitter.once('event', () => {
        // do stuff
        emitter.setMaxListeners(Math.max(emitter.getMaxListeners() - 1, 0));
      });
      

      The --trace-warnings command-line flag can be used to display the stack trace for such warnings.

      The emitted warning can be inspected with process.on('warning') and will have the additional emitter, type, and count properties, referring to the event emitter instance, the event's name and the number of attached listeners, respectively. Its name property is set to 'MaxListenersExceededWarning'.

    • readonly static errorMonitor: typeof errorMonitor

      This symbol shall be used to install a listener for only monitoring 'error' events. Listeners installed using this symbol are called before the regular 'error' listeners are called.

      Installing a listener using this symbol does not change the behavior once an 'error' event is emitted. Therefore, the process will still crash if no regular 'error' listener is installed.

    • callback: (error?: null | Error) => void
      ): void;
    • error: null | Error,
      callback: (error?: null | Error) => void
      ): void;
    • callback: (error?: null | Error) => void
      ): void;
    • ): void;
    • size: number
      ): void;
    • chunk: any,
      encoding: BufferEncoding
      ): void;
    • chunk: any,
      encoding: BufferEncoding,
      callback: (error?: null | Error) => void
      ): void;
    • chunks: { chunk: any; encoding: BufferEncoding }[],
      callback: (error?: null | Error) => void
      ): void;
    • [Symbol.asyncDispose](): Promise<void>;

      Calls readable.destroy() with an AbortError and returns a promise that fulfills when the stream is finished.

    • [Symbol.asyncIterator](): AsyncIterator<any>;
    • error: Error,
      event: string | symbol,
      ...args: AnyRest
      ): void;
    • event: 'close',
      listener: () => void
      ): this;

      Event emitter. The defined events include:

      1. close
      2. data
      3. drain
      4. end
      5. error
      6. finish
      7. pause
      8. pipe
      9. readable
      10. resume
      11. unpipe

      event: 'data',
      listener: (chunk: any) => void
      ): this;
      event: 'drain',
      listener: () => void
      ): this;
      event: 'end',
      listener: () => void
      ): this;
      event: 'error',
      listener: (err: Error) => void
      ): this;
      event: 'finish',
      listener: () => void
      ): this;
      event: 'pause',
      listener: () => void
      ): this;
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;
      event: 'readable',
      listener: () => void
      ): this;
      event: 'resume',
      listener: () => void
      ): this;
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • options?: Pick<ArrayOptions, 'signal'>

      This method returns a new stream with chunks of the underlying stream paired with a counter in the form [index, chunk]. The first index value is 0 and it increases by 1 for each chunk produced.

      @returns

      a stream of indexed pairs.
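
      For example (a minimal sketch; this helper is experimental):

      import { Readable } from 'node:stream';

      const pairs = await Readable.from(['a', 'b', 'c'])
        .asIndexedPairs()
        .toArray();
      console.log(pairs); // [ [ 0, 'a' ], [ 1, 'b' ], [ 2, 'c' ] ]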

    • compose<T extends ReadableStream>(
      stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,
      options?: { signal: AbortSignal }
      ): T;
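
      This composes the stream with the given function, stream, or iterable into a new stream. A minimal sketch using an async generator as the composed transform:

      import { Readable } from 'node:stream';

      async function* splitToWords(source) {
        for await (const chunk of source) {
          for (const word of String(chunk).split(' ')) {
            yield word;
          }
        }
      }

      const words = await Readable.from(['this is compose as operator'])
        .compose(splitToWords)
        .toArray();
      console.log(words); // [ 'this', 'is', 'compose', 'as', 'operator' ]
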
    • cork(): void;

      The writable.cork() method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.

      The primary intent of writable.cork() is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them all to writable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork() without implementing writable._writev() may have an adverse effect on throughput.

      See also: writable.uncork(), writable._writev().
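
      A minimal sketch of the intended pattern, using a PassThrough purely for demonstration and deferring the uncork so the writes in the current tick are batched:

      import { PassThrough } from 'node:stream';

      const stream = new PassThrough();
      stream.on('data', (chunk) => console.log(chunk.toString()));

      stream.cork();
      stream.write('some ');
      stream.write('data ');
      process.nextTick(() => stream.uncork()); // Flushes the buffered chunks.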

    • error?: Error
      ): this;

      Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the readable stream will release any internal resources and subsequent calls to push() will be ignored.

      Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.

      Implementors should not override this method, but instead implement readable._destroy().

      @param error

      Error which will be passed as payload in 'error' event

    • limit: number,
      options?: Pick<ArrayOptions, 'signal'>

      This method returns a new stream with the first limit chunks dropped from the start.

      @param limit

      the number of chunks to drop from the readable.

      @returns

      a stream with limit chunks dropped from the start.
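
      For example (a minimal sketch):

      import { Readable } from 'node:stream';

      const remaining = await Readable.from([1, 2, 3, 4]).drop(2).toArray();
      console.log(remaining); // [ 3, 4 ]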

    • event: 'close'
      ): boolean;

      Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

      Returns true if the event had listeners, false otherwise.

      import { EventEmitter } from 'node:events';
      const myEmitter = new EventEmitter();
      
      // First listener
      myEmitter.on('event', function firstListener() {
        console.log('Helloooo! first listener');
      });
      // Second listener
      myEmitter.on('event', function secondListener(arg1, arg2) {
        console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
      });
      // Third listener
      myEmitter.on('event', function thirdListener(...args) {
        const parameters = args.join(', ');
        console.log(`event with parameters ${parameters} in third listener`);
      });
      
      console.log(myEmitter.listeners('event'));
      
      myEmitter.emit('event', 1, 2, 3, 4, 5);
      
      // Prints:
      // [
      //   [Function: firstListener],
      //   [Function: secondListener],
      //   [Function: thirdListener]
      // ]
      // Helloooo! first listener
      // event with parameters 1, 2 in second listener
      // event with parameters 1, 2, 3, 4, 5 in third listener
      
      event: 'data',
      chunk: any
      ): boolean;
      event: 'drain'
      ): boolean;
      event: 'end'
      ): boolean;
      event: 'error',
      err: Error
      ): boolean;
      event: 'finish'
      ): boolean;
      event: 'pause'
      ): boolean;
      event: 'pipe',
      src: Readable
      ): boolean;
      event: 'readable'
      ): boolean;
      event: 'resume'
      ): boolean;
      event: 'unpipe',
      src: Readable
      ): boolean;
      event: string | symbol,
      ...args: any[]
      ): boolean;
    • cb?: () => void
      ): this;

      Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

      Calling the write method after calling end will raise an error.

      // Write 'hello, ' and then end with 'world!'.
      import fs from 'node:fs';
      const file = fs.createWriteStream('example.txt');
      file.write('hello, ');
      file.end('world!');
      // Writing more now is not allowed!
      
      chunk: any,
      cb?: () => void
      ): this;

      Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

      Calling the write method after calling end will raise an error.

      // Write 'hello, ' and then end with 'world!'.
      import fs from 'node:fs';
      const file = fs.createWriteStream('example.txt');
      file.write('hello, ');
      file.end('world!');
      // Writing more now is not allowed!
      
      @param chunk

      Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

      chunk: any,
      encoding: BufferEncoding,
      cb?: () => void
      ): this;

      Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

      Calling the write method after calling end will raise an error.

      // Write 'hello, ' and then end with 'world!'.
      import fs from 'node:fs';
      const file = fs.createWriteStream('example.txt');
      file.write('hello, ');
      file.end('world!');
      // Writing more now is not allowed!
      
      @param chunk

      Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

      @param encoding

      The encoding if chunk is a string

    • eventNames(): (string | symbol)[];

      Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or Symbols.

      import { EventEmitter } from 'node:events';
      
      const myEE = new EventEmitter();
      myEE.on('foo', () => {});
      myEE.on('bar', () => {});
      
      const sym = Symbol('symbol');
      myEE.on(sym, () => {});
      
      console.log(myEE.eventNames());
      // Prints: [ 'foo', 'bar', Symbol(symbol) ]
      
    • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
      options?: ArrayOptions
      ): Promise<boolean>;

      This method is similar to Array.prototype.every and calls fn on each chunk in the stream to check whether all awaited return values are truthy. Once an awaited fn call on a chunk returns a falsy value, the stream is destroyed and the promise is fulfilled with false. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true.

      @param fn

      a function to call on each chunk of the stream. Async or not.

      @returns

      a promise evaluating to true if fn returned a truthy value for every one of the chunks.
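
      A minimal sketch:

      import { Readable } from 'node:stream';

      const allBig = await Readable.from([4, 5, 6]).every((x) => x > 3);
      console.log(allBig); // true: every chunk is greater than 3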

    • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
      options?: ArrayOptions

      This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be awaited.

      @param fn

      a function to filter chunks from the stream. Async or not.

      @returns

      a stream filtered with the predicate fn.
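
      For example (a minimal sketch):

      import { Readable } from 'node:stream';

      const evens = await Readable.from([1, 2, 3, 4])
        .filter((x) => x % 2 === 0)
        .toArray();
      console.log(evens); // [ 2, 4 ]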

    • ): Buffer;

      Once the decipher.final() method has been called, the Decipher object can no longer be used to decrypt data. Attempts to call decipher.final() more than once will result in an error being thrown.

      @returns

      Any remaining deciphered contents. If outputEncoding is specified, a string is returned. If an outputEncoding is not provided, a Buffer is returned.

      outputEncoding: BufferEncoding
      ): string;

      Once the decipher.final() method has been called, the Decipher object can no longer be used to decrypt data. Attempts to call decipher.final() more than once will result in an error being thrown.

      @param outputEncoding

      The encoding of the return value.

      @returns

      Any remaining deciphered contents. If outputEncoding is specified, a string is returned. If an outputEncoding is not provided, a Buffer is returned.

    • find<T>(
      fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => data is T,
      options?: ArrayOptions
      ): Promise<undefined | T>;

      This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.

      @param fn

      a function to call on each chunk of the stream. Async or not.

      @returns

      a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.

      fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
      options?: ArrayOptions
      ): Promise<any>;

      This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.

      @param fn

      a function to call on each chunk of the stream. Async or not.

      @returns

      a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.
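
      A minimal sketch:

      import { Readable } from 'node:stream';

      const first = await Readable.from([1, 2, 3, 4]).find((x) => x > 2);
      console.log(first); // 3: the first chunk matching the predicate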

    • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => any,
      options?: ArrayOptions

      This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.

      It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.

      @param fn

      a function to map over every chunk in the stream. May be async. May be a stream or generator.

      @returns

      a stream flat-mapped with the function fn.
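
      For example (a minimal sketch; each chunk maps to an iterable that is flattened into the result):

      import { Readable } from 'node:stream';

      const out = await Readable.from([1, 2])
        .flatMap((x) => [x, x * 10])
        .toArray();
      console.log(out); // [ 1, 10, 2, 20 ]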

    • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => void | Promise<void>,
      options?: ArrayOptions
      ): Promise<void>;

      This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be awaited.

      This method is different from for await...of loops in that it can optionally process chunks concurrently. In addition, a forEach iteration can only be stopped by having passed a signal option and aborting the related AbortController while for await...of can be stopped with break or return. In either case the stream will be destroyed.

      This method is different from listening to the 'data' event in that it uses the 'readable' event in the underlying machinery and can limit the number of concurrent fn calls.

      @param fn

      a function to call on each chunk of the stream. Async or not.

      @returns

      a promise for when the stream has finished.
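
      A minimal sketch, using the concurrency option from ArrayOptions to bound concurrent fn calls:

      import { Readable } from 'node:stream';

      await Readable.from([1, 2, 3]).forEach(async (x) => {
        console.log(x);
      }, { concurrency: 2 });
      console.log('stream finished');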

    • getMaxListeners(): number;

      Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to EventEmitter.defaultMaxListeners.

    • isPaused(): boolean;

      The readable.isPaused() method returns the current operating state of the Readable. This is used primarily by the mechanism that underlies the readable.pipe() method. In most typical cases, there will be no reason to use this method directly.

      const readable = new stream.Readable();
      
      readable.isPaused(); // === false
      readable.pause();
      readable.isPaused(); // === true
      readable.resume();
      readable.isPaused(); // === false
      
    • options?: { destroyOnReturn: boolean }
      ): AsyncIterator<any>;

      The iterator created by this method lets users opt out of destroying the stream when the for await...of loop is exited by return, break, or throw; the stream is still destroyed if it emitted an error during iteration.
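
      For example (a minimal sketch), destroyOnReturn: false leaves the stream intact after breaking out of the loop:

      import { Readable } from 'node:stream';

      const readable = Readable.from([1, 2, 3]);
      for await (const chunk of readable.iterator({ destroyOnReturn: false })) {
        console.log(chunk); // 1
        break;
      }
      console.log(readable.destroyed); // false: the stream was not destroyed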

    • eventName: string | symbol,
      listener?: Function
      ): number;

      Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.

      @param eventName

      The name of the event being listened for

      @param listener

      The event handler function
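
      A minimal sketch:

      import { EventEmitter } from 'node:events';

      const ee = new EventEmitter();
      const fn = () => {};
      ee.on('ping', fn);
      ee.on('ping', () => {});
      console.log(ee.listenerCount('ping'));     // 2
      console.log(ee.listenerCount('ping', fn)); // 1: fn appears once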

    • eventName: string | symbol
      ): Function[];

      Returns a copy of the array of listeners for the event named eventName.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      console.log(util.inspect(server.listeners('connection')));
      // Prints: [ [Function] ]
      
    • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => any,
      options?: ArrayOptions

      This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be awaited before being passed to the result stream.

      @param fn

      a function to map over every chunk in the stream. Async or not.

      @returns

      a stream mapped with the function fn.
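
      For example (a minimal sketch; async mappers are awaited before their results are passed on):

      import { Readable } from 'node:stream';

      const doubled = await Readable.from([1, 2, 3])
        .map(async (x) => x * 2)
        .toArray();
      console.log(doubled); // [ 2, 4, 6 ]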

    • off<K>(
      eventName: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Alias for emitter.removeListener().

    • event: 'close',
      listener: () => void
      ): this;

      Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.on('foo', () => console.log('a'));
      myEE.prependListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: 'data',
      listener: (chunk: any) => void
      ): this;
      event: 'drain',
      listener: () => void
      ): this;
      event: 'end',
      listener: () => void
      ): this;
      event: 'error',
      listener: (err: Error) => void
      ): this;
      event: 'finish',
      listener: () => void
      ): this;
      event: 'pause',
      listener: () => void
      ): this;
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;
      event: 'readable',
      listener: () => void
      ): this;
      event: 'resume',
      listener: () => void
      ): this;
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • event: 'close',
      listener: () => void
      ): this;

      Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

      server.once('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.once('foo', () => console.log('a'));
      myEE.prependOnceListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: 'data',
      listener: (chunk: any) => void
      ): this;
      event: 'drain',
      listener: () => void
      ): this;
      event: 'end',
      listener: () => void
      ): this;
      event: 'error',
      listener: (err: Error) => void
      ): this;
      event: 'finish',
      listener: () => void
      ): this;
      event: 'pause',
      listener: () => void
      ): this;
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;
      event: 'readable',
      listener: () => void
      ): this;
      event: 'resume',
      listener: () => void
      ): this;
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • pause(): this;

      The readable.pause() method will cause a stream in flowing mode to stop emitting 'data' events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.

      const readable = getReadableStreamSomehow();
      readable.on('data', (chunk) => {
        console.log(`Received ${chunk.length} bytes of data.`);
        readable.pause();
        console.log('There will be no additional data for 1 second.');
        setTimeout(() => {
          console.log('Now data will start flowing again.');
          readable.resume();
        }, 1000);
      });
      

      The readable.pause() method has no effect if there is a 'readable' event listener.

    • pipe<T extends WritableStream>(
      destination: T,
      options?: { end: boolean }
      ): T;
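
      The readable.pipe() method attaches the destination Writable to this stream and returns the destination so that chains of pipes can be set up; passing { end: false } keeps the destination open after this stream ends. A minimal sketch (file paths are placeholders):

      import { createReadStream, createWriteStream } from 'node:fs';

      const source = createReadStream('input.txt');
      const dest = createWriteStream('output.txt');

      // pipe() returns its destination, so another pipe could be chained on.
      source.pipe(dest);
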
    • event: 'close',
      listener: () => void
      ): this;

      Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.prependListener('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: 'data',
      listener: (chunk: any) => void
      ): this;
      event: 'drain',
      listener: () => void
      ): this;
      event: 'end',
      listener: () => void
      ): this;
      event: 'error',
      listener: (err: Error) => void
      ): this;
      event: 'finish',
      listener: () => void
      ): this;
      event: 'pause',
      listener: () => void
      ): this;
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;
      event: 'readable',
      listener: () => void
      ): this;
      event: 'resume',
      listener: () => void
      ): this;
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • event: 'close',
      listener: () => void
      ): this;

      Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

      server.prependOnceListener('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: 'data',
      listener: (chunk: any) => void
      ): this;
      event: 'drain',
      listener: () => void
      ): this;
      event: 'end',
      listener: () => void
      ): this;
      event: 'error',
      listener: (err: Error) => void
      ): this;
      event: 'finish',
      listener: () => void
      ): this;
      event: 'pause',
      listener: () => void
      ): this;
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;
      event: 'readable',
      listener: () => void
      ): this;
      event: 'resume',
      listener: () => void
      ): this;
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • chunk: any,
      encoding?: BufferEncoding
      ): boolean;
    • eventName: string | symbol
      ): Function[];

      Returns a copy of the array of listeners for the event named eventName, including any wrappers (such as those created by .once()).

      import { EventEmitter } from 'node:events';
      const emitter = new EventEmitter();
      emitter.once('log', () => console.log('log once'));
      
      // Returns a new Array with a function `onceWrapper` which has a property
      // `listener` which contains the original listener bound above
      const listeners = emitter.rawListeners('log');
      const logFnWrapper = listeners[0];
      
      // Logs "log once" to the console and does not unbind the `once` event
      logFnWrapper.listener();
      
      // Logs "log once" to the console and removes the listener
      logFnWrapper();
      
      emitter.on('log', () => console.log('log persistently'));
      // Will return a new Array with a single function bound by `.on()` above
      const newListeners = emitter.rawListeners('log');
      
      // Logs "log persistently" twice
      newListeners[0]();
      emitter.emit('log');
      
    • size?: number
      ): any;

      The readable.read() method reads data out of the internal buffer and returns it. If no data is available to be read, null is returned. By default, the data is returned as a Buffer object unless an encoding has been specified using the readable.setEncoding() method or the stream is operating in object mode.

      The optional size argument specifies a specific number of bytes to read. If size bytes are not available to be read, null will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.

      If the size argument is not specified, all of the data contained in the internal buffer will be returned.

      The size argument must be less than or equal to 1 GiB.

      The readable.read() method should only be called on Readable streams operating in paused mode. In flowing mode, readable.read() is called automatically until the internal buffer is fully drained.

      const readable = getReadableStreamSomehow();
      
      // 'readable' may be triggered multiple times as data is buffered in
      readable.on('readable', () => {
        let chunk;
        console.log('Stream is readable (new data received in buffer)');
        // Use a loop to make sure we read all currently available data
        while (null !== (chunk = readable.read())) {
          console.log(`Read ${chunk.length} bytes of data...`);
        }
      });
      
      // 'end' will be triggered once when there is no more data available
      readable.on('end', () => {
        console.log('Reached end of stream.');
      });
      

      Each call to readable.read() returns a chunk of data, or null. The chunks are not concatenated. A while loop is necessary to consume all data currently in the buffer. When reading a large file .read() may return null, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new 'readable' event will be emitted when there is more data in the buffer. Finally the 'end' event will be emitted when there is no more data to come.

      Therefore to read a file's whole contents from a readable, it is necessary to collect chunks across multiple 'readable' events:

      const chunks = [];
      
      readable.on('readable', () => {
        let chunk;
        while (null !== (chunk = readable.read())) {
          chunks.push(chunk);
        }
      });
      
      readable.on('end', () => {
        const content = chunks.join('');
      });
      

      A Readable stream in object mode will always return a single item from a call to readable.read(size), regardless of the value of the size argument.

      If the readable.read() method returns a chunk of data, a 'data' event will also be emitted.

      Calling read after the 'end' event has been emitted will return null. No runtime error will be raised.

      @param size

      Optional argument to specify how much data to read.

    • reduce<T = any>(
      fn: (previous: any, data: any, options?: Pick<ArrayOptions, 'signal'>) => T,
      initial?: undefined,
      options?: Pick<ArrayOptions, 'signal'>
      ): Promise<T>;

      This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.

      If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.

      The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the readable.map method.

      @param fn

      a reducer function to call over every chunk in the stream. Async or not.

      @param initial

      the initial value to use in the reduction.

      @returns

      a promise for the final value of the reduction.

      reduce<T = any>(
      fn: (previous: T, data: any, options?: Pick<ArrayOptions, 'signal'>) => T,
      initial: T,
      options?: Pick<ArrayOptions, 'signal'>
      ): Promise<T>;

      This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.

      If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.

      The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the readable.map method.

      @param fn

      a reducer function to call over every chunk in the stream. Async or not.

      @param initial

      the initial value to use in the reduction.

      @returns

      a promise for the final value of the reduction.
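
      A rough sketch, using Readable.from to build a stream over an in-memory array:

      import { Readable } from 'node:stream';

      // Sum the chunks; 0 is the explicit initial value.
      const sum = await Readable.from([1, 2, 3, 4])
        .reduce((previous, data) => previous + data, 0);
      console.log(sum); // 10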

    • eventName?: string | symbol
      ): this;

      Removes all listeners, or those of the specified eventName.

      It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).

      Returns a reference to the EventEmitter, so that calls can be chained.
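
      A minimal illustration:

      import { EventEmitter } from 'node:events';

      const emitter = new EventEmitter();
      emitter.on('data', () => {});
      emitter.on('error', () => {});

      emitter.removeAllListeners('data'); // removes only 'data' listeners
      emitter.removeAllListeners();       // removes all remaining listeners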

    • event: 'close',
      listener: () => void
      ): this;

      Removes the specified listener from the listener array for the event named eventName.

      const callback = (stream) => {
        console.log('someone connected!');
      };
      server.on('connection', callback);
      // ...
      server.removeListener('connection', callback);
      

      removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance.

      Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from emit() in progress. Subsequent events behave as expected.

      import { EventEmitter } from 'node:events';
      class MyEmitter extends EventEmitter {}
      const myEmitter = new MyEmitter();
      
      const callbackA = () => {
        console.log('A');
        myEmitter.removeListener('event', callbackB);
      };
      
      const callbackB = () => {
        console.log('B');
      };
      
      myEmitter.on('event', callbackA);
      
      myEmitter.on('event', callbackB);
      
      // callbackA removes listener callbackB but it will still be called.
      // Internal listener array at time of emit [callbackA, callbackB]
      myEmitter.emit('event');
      // Prints:
      //   A
      //   B
      
      // callbackB is now removed.
      // Internal listener array [callbackA]
      myEmitter.emit('event');
      // Prints:
      //   A
      

      Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.

      When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed:

      import { EventEmitter } from 'node:events';
      const ee = new EventEmitter();
      
      function pong() {
        console.log('pong');
      }
      
      ee.on('ping', pong);
      ee.once('ping', pong);
      ee.removeListener('ping', pong);
      
      ee.emit('ping');
      ee.emit('ping');
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      event: 'data',
      listener: (chunk: any) => void
      ): this;
      event: 'drain',
      listener: () => void
      ): this;
      event: 'end',
      listener: () => void
      ): this;
      event: 'error',
      listener: (err: Error) => void
      ): this;
      event: 'finish',
      listener: () => void
      ): this;
      event: 'pause',
      listener: () => void
      ): this;
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;
      event: 'readable',
      listener: () => void
      ): this;
      event: 'resume',
      listener: () => void
      ): this;
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • resume(): this;

      The readable.resume() method causes an explicitly paused Readable stream to resume emitting 'data' events, switching the stream into flowing mode.

      The readable.resume() method can be used to fully consume the data from a stream without actually processing any of that data:

      getReadableStreamSomehow()
        .resume()
        .on('end', () => {
          console.log('Reached the end, but did not read anything.');
        });
      

      The readable.resume() method has no effect if there is a 'readable' event listener.

    • auto_padding?: boolean
      ): this;

      When data has been encrypted without standard block padding, calling decipher.setAutoPadding(false) will disable automatic padding to prevent decipher.final() from checking for and removing padding.

      Turning auto padding off will only work if the input data's length is a multiple of the cipher's block size.

      The decipher.setAutoPadding() method must be called before decipher.final().

      @returns

      for method chaining.
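
      A round-trip sketch with standard padding disabled on both sides; the plaintext is deliberately one full AES block, since unpadded input must be block-aligned:

      const {
        createCipheriv,
        createDecipheriv,
        randomBytes,
      } = await import('node:crypto');

      const key = randomBytes(24); // aes-192 uses a 24-byte key
      const iv = randomBytes(16);

      const cipher = createCipheriv('aes-192-cbc', key, iv);
      cipher.setAutoPadding(false);
      const ciphertext = Buffer.concat([
        cipher.update('exactly sixteen.'), // 16 bytes: one AES block
        cipher.final(),
      ]);

      const decipher = createDecipheriv('aes-192-cbc', key, iv);
      decipher.setAutoPadding(false); // must be called before final()
      const plaintext = Buffer.concat([
        decipher.update(ciphertext),
        decipher.final(),
      ]);
      console.log(plaintext.toString()); // 'exactly sixteen.'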

    • encoding: BufferEncoding
      ): this;

      The writable.setDefaultEncoding() method sets the default encoding for a Writable stream.

      @param encoding

      The new default encoding
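
      A minimal sketch (the output path is a placeholder):

      import { createWriteStream } from 'node:fs';

      const out = createWriteStream('out.bin');
      out.setDefaultEncoding('hex');
      // With no per-call encoding, string chunks are now decoded as hex,
      // so this writes the four bytes 0xde 0xad 0xbe 0xef.
      out.write('deadbeef');
      out.end();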

    • encoding: BufferEncoding
      ): this;

      The readable.setEncoding() method sets the character encoding for data read from the Readable stream.

      By default, no encoding is assigned and stream data will be returned as Buffer objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than as Buffer objects. For instance, calling readable.setEncoding('utf8') will cause the output data to be interpreted as UTF-8 data, and passed as strings. Calling readable.setEncoding('hex') will cause the data to be encoded in hexadecimal string format.

      The Readable stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream as Buffer objects.

      const readable = getReadableStreamSomehow();
      readable.setEncoding('utf8');
      readable.on('data', (chunk) => {
        assert.equal(typeof chunk, 'string');
        console.log('Got %d characters of string data:', chunk.length);
      });
      
      @param encoding

      The encoding to use.

    • n: number
      ): this;

      By default EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps finding memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.

      Returns a reference to the EventEmitter, so that calls can be chained.
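
      A short illustration:

      import { EventEmitter } from 'node:events';

      const emitter = new EventEmitter();
      emitter.setMaxListeners(20); // raise the warning threshold for this instance

      for (let i = 0; i < 15; i++) {
        emitter.on('job', () => {}); // no MaxListenersExceededWarning is printed
      }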

    • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
      options?: ArrayOptions
      ): Promise<boolean>;

      This method is similar to Array.prototype.some and calls fn on each chunk in the stream until the awaited return value is true (or any truthy value). Once the awaited return value of an fn call on a chunk is truthy, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false.

      @param fn

      a function to call on each chunk of the stream. Async or not.

      @returns

      a promise evaluating to true if fn returned a truthy value for at least one of the chunks.
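
      A rough sketch over an in-memory stream:

      import { Readable } from 'node:stream';

      // Resolves true as soon as a chunk satisfies the predicate;
      // the stream is then destroyed.
      const anyBig = await Readable.from([1, 2, 3, 4]).some((x) => x > 3);
      console.log(anyBig); // true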

    • limit: number,
      options?: Pick<ArrayOptions, 'signal'>
      ): Readable;

      This method returns a new stream with the first limit chunks.

      @param limit

      the number of chunks to take from the readable.

      @returns

      a stream with limit chunks taken.
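
      For instance, iterating the truncated stream:

      import { Readable } from 'node:stream';

      // take() returns a new stream limited to the first two chunks.
      for await (const chunk of Readable.from([1, 2, 3, 4]).take(2)) {
        console.log(chunk); // 1, then 2
      }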

    • options?: Pick<ArrayOptions, 'signal'>
      ): Promise<any[]>;

      This method allows easily obtaining the contents of a stream.

      As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.

      @returns

      a promise containing an array with the contents of the stream.
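
      A minimal sketch:

      import { Readable } from 'node:stream';

      const chunks = await Readable.from(['a', 'b', 'c']).toArray();
      console.log(chunks); // [ 'a', 'b', 'c' ]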

    • uncork(): void;

      The writable.uncork() method flushes all data buffered since cork was called.

      When using writable.cork() and writable.uncork() to manage the buffering of writes to a stream, defer calls to writable.uncork() using process.nextTick(). Doing so allows batching of all writable.write() calls that occur within a given Node.js event loop phase.

      stream.cork();
      stream.write('some ');
      stream.write('data ');
      process.nextTick(() => stream.uncork());
      

      If the writable.cork() method is called multiple times on a stream, the same number of calls to writable.uncork() must be called to flush the buffered data.

      stream.cork();
      stream.write('some ');
      stream.cork();
      stream.write('data ');
      process.nextTick(() => {
        stream.uncork();
        // The data will not be flushed until uncork() is called a second time.
        stream.uncork();
      });
      

      See also: writable.cork().

    • destination?: WritableStream
      ): this;

      The readable.unpipe() method detaches a Writable stream previously attached using the pipe method.

      If the destination is not specified, then all pipes are detached.

      If the destination is specified, but no pipe is set up for it, then the method does nothing.

      import fs from 'node:fs';
      const readable = getReadableStreamSomehow();
      const writable = fs.createWriteStream('file.txt');
      // All the data from readable goes into 'file.txt',
      // but only for the first second.
      readable.pipe(writable);
      setTimeout(() => {
        console.log('Stop writing to file.txt.');
        readable.unpipe(writable);
        console.log('Manually close the file stream.');
        writable.end();
      }, 1000);
      
      @param destination

      Optional specific stream to unpipe

    • chunk: any,
      encoding?: BufferEncoding
      ): void;

      Passing chunk as null signals the end of the stream (EOF) and behaves the same as readable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.

      The readable.unshift() method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.

      The stream.unshift(chunk) method cannot be called after the 'end' event has been emitted or a runtime error will be thrown.

      Developers using stream.unshift() often should consider switching to use of a Transform stream instead. See the API for stream implementers section for more information.

      // Pull off a header delimited by \n\n.
      // Use unshift() if we get too much.
      // Call the callback with (error, header, stream).
      import { StringDecoder } from 'node:string_decoder';
      function parseHeader(stream, callback) {
        stream.on('error', callback);
        stream.on('readable', onReadable);
        const decoder = new StringDecoder('utf8');
        let header = '';
        function onReadable() {
          let chunk;
          while (null !== (chunk = stream.read())) {
            const str = decoder.write(chunk);
            if (str.includes('\n\n')) {
              // Found the header boundary.
              const split = str.split(/\n\n/);
              header += split.shift();
              const remaining = split.join('\n\n');
              const buf = Buffer.from(remaining, 'utf8');
              stream.removeListener('error', callback);
              // Remove the 'readable' listener before unshifting.
              stream.removeListener('readable', onReadable);
              if (buf.length)
                stream.unshift(buf);
              // Now the body of the message can be read from the stream.
              callback(null, header, stream);
              return;
            }
            // Still reading the header.
            header += str;
          }
        }
      }
      

      Unlike push, stream.unshift(chunk) will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results if readable.unshift() is called during a read (i.e. from within a _read implementation on a custom stream). Following the call to readable.unshift() with an immediate push will reset the reading state appropriately, however it is best to simply avoid calling readable.unshift() while in the process of performing a read.

      @param chunk

      Chunk of data to unshift onto the read queue. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray}, {DataView} or null. For object mode streams, chunk may be any JavaScript value.

      @param encoding

      Encoding of string chunks. Must be a valid Buffer encoding, such as 'utf8' or 'ascii'.

    • data: ArrayBufferView
      ): Buffer;

      Updates the decipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer. If data is a Buffer then inputEncoding is ignored.

      The outputEncoding specifies the output format of the enciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.

      The decipher.update() method can be called multiple times with new data until decipher.final() is called. Calling decipher.update() after decipher.final() will result in an error being thrown.

      data: string,
      inputEncoding: Encoding
      ): Buffer;

      Updates the decipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer. If data is a Buffer then inputEncoding is ignored.

      The outputEncoding specifies the output format of the enciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.

      The decipher.update() method can be called multiple times with new data until decipher.final() is called. Calling decipher.update() after decipher.final() will result in an error being thrown.

      @param inputEncoding

      The encoding of the data string.

      data: ArrayBufferView,
      inputEncoding: undefined,
      outputEncoding: Encoding
      ): string;

      Updates the decipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer. If data is a Buffer then inputEncoding is ignored.

      The outputEncoding specifies the output format of the enciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.

      The decipher.update() method can be called multiple times with new data until decipher.final() is called. Calling decipher.update() after decipher.final() will result in an error being thrown.

      @param inputEncoding

      The encoding of the data string.

      @param outputEncoding

      The encoding of the return value.

      data: string,
      inputEncoding: undefined | Encoding,
      outputEncoding: Encoding
      ): string;

      Updates the decipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer. If data is a Buffer then inputEncoding is ignored.

      The outputEncoding specifies the output format of the enciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.

      The decipher.update() method can be called multiple times with new data until decipher.final() is called. Calling decipher.update() after decipher.final() will result in an error being thrown.

      @param inputEncoding

      The encoding of the data string.

      @param outputEncoding

      The encoding of the return value.
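
      Tying the overloads together, a round-trip sketch that feeds utf8 text to the cipher, carries the ciphertext as a hex string, and reads utf8 back out of the decipher:

      const {
        createCipheriv,
        createDecipheriv,
        randomBytes,
      } = await import('node:crypto');

      const key = randomBytes(24);
      const iv = randomBytes(16);

      const cipher = createCipheriv('aes-192-cbc', key, iv);
      let encrypted = cipher.update('some clear text data', 'utf8', 'hex');
      encrypted += cipher.final('hex');

      const decipher = createDecipheriv('aes-192-cbc', key, iv);
      // String input interpreted as hex; string output returned as utf8.
      let decrypted = decipher.update(encrypted, 'hex', 'utf8');
      decrypted += decipher.final('utf8');
      console.log(decrypted); // 'some clear text data'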

    • stream: ReadableStream
      ): this;

      Prior to Node.js 0.10, streams did not implement the entire node:stream module API as it is currently defined. (See Compatibility for more information.)

      When using an older Node.js library that emits 'data' events and has a pause method that is advisory only, the readable.wrap() method can be used to create a Readable stream that uses the old stream as its data source.

      It will rarely be necessary to use readable.wrap() but the method has been provided as a convenience for interacting with older Node.js applications and libraries.

      import { OldReader } from './old-api-module.js';
      import { Readable } from 'node:stream';
      const oreader = new OldReader();
      const myReader = new Readable().wrap(oreader);
      
      myReader.on('readable', () => {
        myReader.read(); // etc.
      });
      
      @param stream

      An "old style" readable stream

    • chunk: any,
      callback?: (error: undefined | null | Error) => void
      ): boolean;

      The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

      The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

      While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

      Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

      If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

      function write(data, cb) {
        if (!stream.write(data)) {
          stream.once('drain', cb);
        } else {
          process.nextTick(cb);
        }
      }
      
      // Wait for cb to be called before doing any other write.
      write('hello', () => {
        console.log('Write completed, do more writes now.');
      });
      

      A Writable stream in object mode will always ignore the encoding argument.

      @param chunk

      Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

      @param callback

      Callback for when this chunk of data is flushed.

      @returns

      false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.

      chunk: any,
      encoding: BufferEncoding,
      callback?: (error: undefined | null | Error) => void
      ): boolean;

      The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

      The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

      While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

      Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

      If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

      function write(data, cb) {
        if (!stream.write(data)) {
          stream.once('drain', cb);
        } else {
          process.nextTick(cb);
        }
      }
      
      // Wait for cb to be called before doing any other write.
      write('hello', () => {
        console.log('Write completed, do more writes now.');
      });
      

      A Writable stream in object mode will always ignore the encoding argument.

      @param chunk

      Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

      @param encoding

      The encoding, if chunk is a string.

      @param callback

      Callback for when this chunk of data is flushed.

      @returns

      false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.

    • signal: AbortSignal,
      resource: (event: Event) => void
      ): Disposable;

      Listens once to the abort event on the provided signal.

      Listening to the abort event on abort signals is unsafe and may lead to resource leaks since another third party with the signal can call e.stopImmediatePropagation(). Unfortunately Node.js cannot change this since it would violate the web standard. Additionally, the original API makes it easy to forget to remove listeners.

      This API allows safely using AbortSignals in Node.js APIs by solving these two issues: the listener runs even if stopImmediatePropagation is called, and the returned disposable makes the listener easy to remove.

      Returns a disposable so that it may be unsubscribed from more easily.

      import { addAbortListener } from 'node:events';
      
      function example(signal) {
        let disposable;
        try {
          signal.addEventListener('abort', (e) => e.stopImmediatePropagation());
          disposable = addAbortListener(signal, (e) => {
            // Do something when signal is aborted.
          });
        } finally {
          disposable?.[Symbol.dispose]();
        }
      }
      
      @returns

      Disposable that removes the abort listener.

    • static from(
      src: string | Object | Stream | ArrayBuffer | Blob | Iterable<any, any, any> | AsyncIterable<any, any, any> | AsyncGeneratorFunction | Promise<any>
      ): Duplex;

      A utility method for creating duplex streams.

      • Stream converts writable stream into writable Duplex and readable stream to Duplex.
      • Blob converts into readable Duplex.
      • string converts into readable Duplex.
      • ArrayBuffer converts into readable Duplex.
      • AsyncIterable converts into a readable Duplex. Cannot yield null.
      • AsyncGeneratorFunction converts into a readable/writable transform Duplex. Must take a source AsyncIterable as first parameter. Cannot yield null.
      • AsyncFunction converts into a writable Duplex. Must return either null or undefined.
      • Object ({ writable, readable }) converts readable and writable into Stream and then combines them into Duplex where the Duplex will write to the writable and read from the readable.
      • Promise converts into readable Duplex. Value null is ignored.
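
      As a rough sketch of the AsyncGeneratorFunction case (the transform shown is illustrative):

      import { Duplex } from 'node:stream';

      // The generator receives the written chunks as its source iterable,
      // and its yields become the readable side.
      const upperCase = Duplex.from(async function* (source) {
        for await (const chunk of source) {
          yield String(chunk).toUpperCase();
        }
      });

      upperCase.on('data', (chunk) => console.log(chunk)); // HELLO
      upperCase.write('hello');
      upperCase.end();
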
    • static fromWeb(
      duplexStream: { readable: ReadableStream; writable: WritableStream },
      options?: Pick<DuplexOptions<Duplex>, 'signal' | 'allowHalfOpen' | 'decodeStrings' | 'encoding' | 'highWaterMark' | 'objectMode'>
      ): Duplex;

      A utility method for creating a Duplex from a web ReadableStream and WritableStream.
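
      A minimal sketch, assuming a simple in-memory web stream pair:

      import { Duplex } from 'node:stream';

      const readable = new ReadableStream({
        start(controller) {
          controller.enqueue('hello');
          controller.close();
        },
      });
      const writable = new WritableStream({
        write(chunk) {
          console.log('web side received:', chunk);
        },
      });

      const duplex = Duplex.fromWeb({ readable, writable }, { objectMode: true });
      duplex.on('data', (chunk) => console.log('node side read:', chunk));
      duplex.write('hi');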

    • emitter: EventEmitter<DefaultEventMap> | EventTarget,
      name: string | symbol
      ): Function[];

      Returns a copy of the array of listeners for the event named eventName.

      For EventEmitters this behaves exactly the same as calling .listeners on the emitter.

      For EventTargets this is the only way to get the event listeners for the event target. This is useful for debugging and diagnostic purposes.

      import { getEventListeners, EventEmitter } from 'node:events';
      
      {
        const ee = new EventEmitter();
        const listener = () => console.log('Events are fun');
        ee.on('foo', listener);
        console.log(getEventListeners(ee, 'foo')); // [ [Function: listener] ]
      }
      {
        const et = new EventTarget();
        const listener = () => console.log('Events are fun');
        et.addEventListener('foo', listener);
        console.log(getEventListeners(et, 'foo')); // [ [Function: listener] ]
      }
      
    • emitter: EventEmitter<DefaultEventMap> | EventTarget
      ): number;

      Returns the currently set max amount of listeners.

      For EventEmitters this behaves exactly the same as calling .getMaxListeners on the emitter.

      For EventTargets this is the only way to get the max event listeners for the event target. If the number of event handlers on a single EventTarget exceeds the max set, the EventTarget will print a warning.

      import { getMaxListeners, setMaxListeners, EventEmitter } from 'node:events';
      
      {
        const ee = new EventEmitter();
        console.log(getMaxListeners(ee)); // 10
        setMaxListeners(11, ee);
        console.log(getMaxListeners(ee)); // 11
      }
      {
        const et = new EventTarget();
        console.log(getMaxListeners(et)); // 10
        setMaxListeners(11, et);
        console.log(getMaxListeners(et)); // 11
      }
      
    • static on(
      emitter: EventEmitter,
      eventName: string | symbol,
      options?: StaticEventEmitterIteratorOptions
      ): AsyncIterator<any[]>;
      import { on, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ee = new EventEmitter();
      
      // Emit later on
      process.nextTick(() => {
        ee.emit('foo', 'bar');
        ee.emit('foo', 42);
      });
      
      for await (const event of on(ee, 'foo')) {
        // The execution of this inner block is synchronous and it
        // processes one event at a time (even with await). Do not use
        // if concurrent execution is required.
        console.log(event); // prints ['bar'] [42]
      }
      // Unreachable here
      

      Returns an AsyncIterator that iterates eventName events. It will throw if the EventEmitter emits 'error'. It removes all listeners when exiting the loop. The value returned by each iteration is an array composed of the emitted event arguments.

      An AbortSignal can be used to cancel waiting on events:

      import { on, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ac = new AbortController();
      
      (async () => {
        const ee = new EventEmitter();
      
        // Emit later on
        process.nextTick(() => {
          ee.emit('foo', 'bar');
          ee.emit('foo', 42);
        });
      
        for await (const event of on(ee, 'foo', { signal: ac.signal })) {
          // The execution of this inner block is synchronous and it
          // processes one event at a time (even with await). Do not use
          // if concurrent execution is required.
          console.log(event); // prints ['bar'] [42]
        }
        // Unreachable here
      })();
      
      process.nextTick(() => ac.abort());
      

      Use the close option to specify an array of event names that will end the iteration:

      import { on, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ee = new EventEmitter();
      
      // Emit later on
      process.nextTick(() => {
        ee.emit('foo', 'bar');
        ee.emit('foo', 42);
        ee.emit('close');
      });
      
      for await (const event of on(ee, 'foo', { close: ['close'] })) {
        console.log(event); // prints ['bar'] [42]
      }
      // the loop will exit after 'close' is emitted
      console.log('done'); // prints 'done'
      
      @returns

      An AsyncIterator that iterates eventName events emitted by the emitter

      static on(
      emitter: EventTarget,
      eventName: string,
      options?: StaticEventEmitterIteratorOptions
      ): AsyncIterator<any[]>;
      import { on, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ee = new EventEmitter();
      
      // Emit later on
      process.nextTick(() => {
        ee.emit('foo', 'bar');
        ee.emit('foo', 42);
      });
      
      for await (const event of on(ee, 'foo')) {
        // The execution of this inner block is synchronous and it
        // processes one event at a time (even with await). Do not use
        // if concurrent execution is required.
        console.log(event); // prints ['bar'] [42]
      }
      // Unreachable here
      

      Returns an AsyncIterator that iterates eventName events. It will throw if the EventEmitter emits 'error'. It removes all listeners when exiting the loop. The value returned by each iteration is an array composed of the emitted event arguments.

      An AbortSignal can be used to cancel waiting on events:

      import { on, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ac = new AbortController();
      
      (async () => {
        const ee = new EventEmitter();
      
        // Emit later on
        process.nextTick(() => {
          ee.emit('foo', 'bar');
          ee.emit('foo', 42);
        });
      
        for await (const event of on(ee, 'foo', { signal: ac.signal })) {
          // The execution of this inner block is synchronous and it
          // processes one event at a time (even with await). Do not use
          // if concurrent execution is required.
          console.log(event); // prints ['bar'] [42]
        }
        // Unreachable here
      })();
      
      process.nextTick(() => ac.abort());
      

      Use the close option to specify an array of event names that will end the iteration:

      import { on, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ee = new EventEmitter();
      
      // Emit later on
      process.nextTick(() => {
        ee.emit('foo', 'bar');
        ee.emit('foo', 42);
        ee.emit('close');
      });
      
      for await (const event of on(ee, 'foo', { close: ['close'] })) {
        console.log(event); // prints ['bar'] [42]
      }
      // the loop will exit after 'close' is emitted
      console.log('done'); // prints 'done'
      
      @returns

      An AsyncIterator that iterates eventName events emitted by the emitter

    • static once(
      emitter: EventEmitter,
      eventName: string | symbol,
      options?: StaticEventEmitterOptions
      ): Promise<any[]>;

      Creates a Promise that is fulfilled when the EventEmitter emits the given event or that is rejected if the EventEmitter emits 'error' while waiting. The Promise will resolve with an array of all the arguments emitted to the given event.

      This method is intentionally generic and works with the web platform EventTarget interface, which has no special 'error' event semantics and does not listen to the 'error' event.

      import { once, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ee = new EventEmitter();
      
      process.nextTick(() => {
        ee.emit('myevent', 42);
      });
      
      const [value] = await once(ee, 'myevent');
      console.log(value);
      
      const err = new Error('kaboom');
      process.nextTick(() => {
        ee.emit('error', err);
      });
      
      try {
        await once(ee, 'myevent');
      } catch (err) {
        console.error('error happened', err);
      }
      

      The special handling of the 'error' event is only used when events.once() is used to wait for another event. If events.once() is used to wait for the 'error' event itself, then it is treated as any other kind of event without special handling:

      import { EventEmitter, once } from 'node:events';
      
      const ee = new EventEmitter();
      
      once(ee, 'error')
        .then(([err]) => console.log('ok', err.message))
        .catch((err) => console.error('error', err.message));
      
      ee.emit('error', new Error('boom'));
      
      // Prints: ok boom
      

      An AbortSignal can be used to cancel waiting for the event:

      import { EventEmitter, once } from 'node:events';
      
      const ee = new EventEmitter();
      const ac = new AbortController();
      
      async function foo(emitter, event, signal) {
        try {
          await once(emitter, event, { signal });
          console.log('event emitted!');
        } catch (error) {
          if (error.name === 'AbortError') {
            console.error('Waiting for the event was canceled!');
          } else {
            console.error('There was an error', error.message);
          }
        }
      }
      
      foo(ee, 'foo', ac.signal);
      ac.abort(); // Abort waiting for the event
      ee.emit('foo'); // Prints: Waiting for the event was canceled!
      
      static once(
      emitter: EventTarget,
      eventName: string,
      options?: StaticEventEmitterOptions
      ): Promise<any[]>;

      Creates a Promise that is fulfilled when the EventEmitter emits the given event or that is rejected if the EventEmitter emits 'error' while waiting. The Promise will resolve with an array of all the arguments emitted to the given event.

      This method is intentionally generic and works with the web platform EventTarget interface, which has no special 'error' event semantics and does not listen to the 'error' event.

      import { once, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ee = new EventEmitter();
      
      process.nextTick(() => {
        ee.emit('myevent', 42);
      });
      
      const [value] = await once(ee, 'myevent');
      console.log(value);
      
      const err = new Error('kaboom');
      process.nextTick(() => {
        ee.emit('error', err);
      });
      
      try {
        await once(ee, 'myevent');
      } catch (err) {
        console.error('error happened', err);
      }
      

      The special handling of the 'error' event is only used when events.once() is used to wait for another event. If events.once() is used to wait for the 'error' event itself, then it is treated as any other kind of event without special handling:

      import { EventEmitter, once } from 'node:events';
      
      const ee = new EventEmitter();
      
      once(ee, 'error')
        .then(([err]) => console.log('ok', err.message))
        .catch((err) => console.error('error', err.message));
      
      ee.emit('error', new Error('boom'));
      
      // Prints: ok boom
      

      An AbortSignal can be used to cancel waiting for the event:

      import { EventEmitter, once } from 'node:events';
      
      const ee = new EventEmitter();
      const ac = new AbortController();
      
      async function foo(emitter, event, signal) {
        try {
          await once(emitter, event, { signal });
          console.log('event emitted!');
        } catch (error) {
          if (error.name === 'AbortError') {
            console.error('Waiting for the event was canceled!');
          } else {
            console.error('There was an error', error.message);
          }
        }
      }
      
      foo(ee, 'foo', ac.signal);
      ac.abort(); // Abort waiting for the event
      ee.emit('foo'); // Prints: Waiting for the event was canceled!
      
    • n?: number,
      ...eventTargets: EventEmitter<DefaultEventMap> | EventTarget[]
      ): void;
      import { setMaxListeners, EventEmitter } from 'node:events';
      
      const target = new EventTarget();
      const emitter = new EventEmitter();
      
      setMaxListeners(5, target, emitter);
      
      @param n

      A non-negative number. The maximum number of listeners per EventTarget event.

      @param eventTargets

      Zero or more {EventTarget} or {EventEmitter} instances. If none are specified, n is set as the default max for all newly created {EventTarget} and {EventEmitter} objects.

    • static toWeb(
      streamDuplex: Duplex
      ): { readable: ReadableStream; writable: WritableStream };

      A utility method for creating a web ReadableStream and WritableStream from a Duplex.
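
      A minimal sketch (the Duplex shown is illustrative):

      import { Duplex } from 'node:stream';

      const duplex = new Duplex({
        read() {
          this.push('from node');
          this.push(null);
        },
        write(chunk, encoding, callback) {
          console.log('node side got:', chunk.toString());
          callback();
        },
      });

      const { readable, writable } = Duplex.toWeb(duplex);

      await writable.getWriter().write('to node'); // node side got: to node

      const { value } = await readable.getReader().read();
      console.log(value); // the bytes of 'from node'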

  • class DiffieHellman

    The DiffieHellman class is a utility for creating Diffie-Hellman key exchanges.

    Instances of the DiffieHellman class can be created using the createDiffieHellman function.

    import assert from 'node:assert';
    
    const {
      createDiffieHellman,
    } = await import('node:crypto');
    
    // Generate Alice's keys...
    const alice = createDiffieHellman(2048);
    const aliceKey = alice.generateKeys();
    
    // Generate Bob's keys...
    const bob = createDiffieHellman(alice.getPrime(), alice.getGenerator());
    const bobKey = bob.generateKeys();
    
    // Exchange and generate the secret...
    const aliceSecret = alice.computeSecret(bobKey);
    const bobSecret = bob.computeSecret(aliceKey);
    
    // OK
    assert.strictEqual(aliceSecret.toString('hex'), bobSecret.toString('hex'));
    
    • verifyError: number

      A bit field containing any warnings and/or errors resulting from a check performed during initialization of the DiffieHellman object.

      The following values are valid for this property (as defined in node:constants module):

      • DH_CHECK_P_NOT_SAFE_PRIME
      • DH_CHECK_P_NOT_PRIME
      • DH_UNABLE_TO_CHECK_GENERATOR
      • DH_NOT_SUITABLE_GENERATOR
    • otherPublicKey: ArrayBufferView,
      inputEncoding?: null,
      outputEncoding?: null
      ): Buffer;

      Computes the shared secret using otherPublicKey as the other party's public key and returns the computed shared secret. The supplied key is interpreted using the specified inputEncoding, and secret is encoded using specified outputEncoding. If the inputEncoding is not provided, otherPublicKey is expected to be a Buffer, TypedArray, or DataView.

      If outputEncoding is given a string is returned; otherwise, a Buffer is returned.

      @param inputEncoding

      The encoding of an otherPublicKey string.

      @param outputEncoding

      The encoding of the return value.

      otherPublicKey: string,
      inputEncoding: BinaryToTextEncoding,
      outputEncoding?: null
      ): Buffer;

      Computes the shared secret using otherPublicKey as the other party's public key and returns the computed shared secret. The supplied key is interpreted using the specified inputEncoding, and secret is encoded using specified outputEncoding. If the inputEncoding is not provided, otherPublicKey is expected to be a Buffer, TypedArray, or DataView.

      If outputEncoding is given a string is returned; otherwise, a Buffer is returned.

      @param inputEncoding

      The encoding of an otherPublicKey string.

      @param outputEncoding

      The encoding of the return value.

      otherPublicKey: ArrayBufferView,
      inputEncoding: null,
      outputEncoding: BinaryToTextEncoding
      ): string;

      Computes the shared secret using otherPublicKey as the other party's public key and returns the computed shared secret. The supplied key is interpreted using the specified inputEncoding, and secret is encoded using specified outputEncoding. If the inputEncoding is not provided, otherPublicKey is expected to be a Buffer, TypedArray, or DataView.

      If outputEncoding is given a string is returned; otherwise, a Buffer is returned.

      @param inputEncoding

      The encoding of an otherPublicKey string.

      @param outputEncoding

      The encoding of the return value.

      otherPublicKey: string,
      inputEncoding: BinaryToTextEncoding,
      outputEncoding: BinaryToTextEncoding
      ): string;

      Computes the shared secret using otherPublicKey as the other party's public key and returns the computed shared secret. The supplied key is interpreted using the specified inputEncoding, and secret is encoded using specified outputEncoding. If the inputEncoding is not provided, otherPublicKey is expected to be a Buffer, TypedArray, or DataView.

      If outputEncoding is given a string is returned; otherwise, a Buffer is returned.

      @param inputEncoding

      The encoding of an otherPublicKey string.

      @param outputEncoding

      The encoding of the return value.

    • generateKeys(): Buffer;

      Generates private and public Diffie-Hellman key values unless they have been generated or computed already, and returns the public key in the specified encoding. This key should be transferred to the other party. If encoding is provided a string is returned; otherwise a Buffer is returned.

      This function is a thin wrapper around DH_generate_key(). In particular, once a private key has been generated or set, calling this function only updates the public key but does not generate a new private key.

      encoding: BinaryToTextEncoding
      ): string;

      Generates private and public Diffie-Hellman key values unless they have been generated or computed already, and returns the public key in the specified encoding. This key should be transferred to the other party. If encoding is provided a string is returned; otherwise a Buffer is returned.

      This function is a thin wrapper around DH_generate_key(). In particular, once a private key has been generated or set, calling this function only updates the public key but does not generate a new private key.

      @param encoding

      The encoding of the return value.

    • getGenerator(): Buffer;

      Returns the Diffie-Hellman generator in the specified encoding. If encoding is provided a string is returned; otherwise a Buffer is returned.

      encoding: BinaryToTextEncoding
      ): string;

      Returns the Diffie-Hellman generator in the specified encoding. If encoding is provided a string is returned; otherwise a Buffer is returned.

      @param encoding

      The encoding of the return value.

    • getPrime(): Buffer;

      Returns the Diffie-Hellman prime in the specified encoding. If encoding is provided a string is returned; otherwise a Buffer is returned.

      encoding: BinaryToTextEncoding
      ): string;

      Returns the Diffie-Hellman prime in the specified encoding. If encoding is provided a string is returned; otherwise a Buffer is returned.

      @param encoding

      The encoding of the return value.

    • getPrivateKey(): Buffer;

      Returns the Diffie-Hellman private key in the specified encoding. If encoding is provided a string is returned; otherwise a Buffer is returned.

      encoding: BinaryToTextEncoding
      ): string;

      Returns the Diffie-Hellman private key in the specified encoding. If encoding is provided a string is returned; otherwise a Buffer is returned.

      @param encoding

      The encoding of the return value.

    • getPublicKey(): Buffer;

      Returns the Diffie-Hellman public key in the specified encoding. If encoding is provided a string is returned; otherwise a Buffer is returned.

      encoding: BinaryToTextEncoding
      ): string;

      Returns the Diffie-Hellman public key in the specified encoding. If encoding is provided a string is returned; otherwise a Buffer is returned.

      @param encoding

      The encoding of the return value.

    • privateKey: ArrayBufferView
      ): void;

      Sets the Diffie-Hellman private key. If the encoding argument is provided, privateKey is expected to be a string. If no encoding is provided, privateKey is expected to be a Buffer, TypedArray, or DataView.

      This function does not automatically compute the associated public key. Either diffieHellman.setPublicKey() or diffieHellman.generateKeys() can be used to manually provide the public key or to automatically derive it.

      privateKey: string,
      encoding: BufferEncoding
      ): void;

      Sets the Diffie-Hellman private key. If the encoding argument is provided, privateKey is expected to be a string. If no encoding is provided, privateKey is expected to be a Buffer, TypedArray, or DataView.

      This function does not automatically compute the associated public key. Either diffieHellman.setPublicKey() or diffieHellman.generateKeys() can be used to manually provide the public key or to automatically derive it.

      @param encoding

      The encoding of the privateKey string.

    • publicKey: ArrayBufferView
      ): void;

      Sets the Diffie-Hellman public key. If the encoding argument is provided, publicKey is expected to be a string. If no encoding is provided, publicKey is expected to be a Buffer, TypedArray, or DataView.

      publicKey: string,
      encoding: BufferEncoding
      ): void;

      Sets the Diffie-Hellman public key. If the encoding argument is provided, publicKey is expected to be a string. If no encoding is provided, publicKey is expected to be a Buffer, TypedArray, or DataView.

      @param encoding

      The encoding of the publicKey string.

  • class ECDH

    The ECDH class is a utility for creating Elliptic Curve Diffie-Hellman (ECDH) key exchanges.

    Instances of the ECDH class can be created using the createECDH function.

    import assert from 'node:assert';
    
    const {
      createECDH,
    } = await import('node:crypto');
    
    // Generate Alice's keys...
    const alice = createECDH('secp521r1');
    const aliceKey = alice.generateKeys();
    
    // Generate Bob's keys...
    const bob = createECDH('secp521r1');
    const bobKey = bob.generateKeys();
    
    // Exchange and generate the secret...
    const aliceSecret = alice.computeSecret(bobKey);
    const bobSecret = bob.computeSecret(aliceKey);
    
    assert.strictEqual(aliceSecret.toString('hex'), bobSecret.toString('hex'));
    // OK
    
    • computeSecret(
      otherPublicKey: ArrayBufferView
      ): Buffer;

      Computes the shared secret using otherPublicKey as the other party's public key and returns the computed shared secret. The supplied key is interpreted using the specified inputEncoding, and the returned secret is encoded using the specified outputEncoding. If the inputEncoding is not provided, otherPublicKey is expected to be a Buffer, TypedArray, or DataView.

      If outputEncoding is given a string will be returned; otherwise a Buffer is returned.

      ecdh.computeSecret will throw an ERR_CRYPTO_ECDH_INVALID_PUBLIC_KEY error when otherPublicKey lies outside of the elliptic curve. Since otherPublicKey is usually supplied from a remote user over an insecure network, be sure to handle this exception accordingly.

      computeSecret(
      otherPublicKey: string,
      inputEncoding: BinaryToTextEncoding
      ): Buffer;

      Computes the shared secret using otherPublicKey as the other party's public key and returns the computed shared secret. The supplied key is interpreted using the specified inputEncoding, and the returned secret is encoded using the specified outputEncoding. If the inputEncoding is not provided, otherPublicKey is expected to be a Buffer, TypedArray, or DataView.

      If outputEncoding is given a string will be returned; otherwise a Buffer is returned.

      ecdh.computeSecret will throw an ERR_CRYPTO_ECDH_INVALID_PUBLIC_KEY error when otherPublicKey lies outside of the elliptic curve. Since otherPublicKey is usually supplied from a remote user over an insecure network, be sure to handle this exception accordingly.

      @param inputEncoding

      The encoding of the otherPublicKey string.

      computeSecret(
      otherPublicKey: ArrayBufferView,
      outputEncoding: BinaryToTextEncoding
      ): string;

      Computes the shared secret using otherPublicKey as the other party's public key and returns the computed shared secret. The supplied key is interpreted using the specified inputEncoding, and the returned secret is encoded using the specified outputEncoding. If the inputEncoding is not provided, otherPublicKey is expected to be a Buffer, TypedArray, or DataView.

      If outputEncoding is given a string will be returned; otherwise a Buffer is returned.

      ecdh.computeSecret will throw an ERR_CRYPTO_ECDH_INVALID_PUBLIC_KEY error when otherPublicKey lies outside of the elliptic curve. Since otherPublicKey is usually supplied from a remote user over an insecure network, be sure to handle this exception accordingly.

      @param outputEncoding

      The encoding of the return value.

      computeSecret(
      otherPublicKey: string,
      inputEncoding: BinaryToTextEncoding,
      outputEncoding: BinaryToTextEncoding
      ): string;

      Computes the shared secret using otherPublicKey as the other party's public key and returns the computed shared secret. The supplied key is interpreted using the specified inputEncoding, and the returned secret is encoded using the specified outputEncoding. If the inputEncoding is not provided, otherPublicKey is expected to be a Buffer, TypedArray, or DataView.

      If outputEncoding is given a string will be returned; otherwise a Buffer is returned.

      ecdh.computeSecret will throw an ERR_CRYPTO_ECDH_INVALID_PUBLIC_KEY error when otherPublicKey lies outside of the elliptic curve. Since otherPublicKey is usually supplied from a remote user over an insecure network, be sure to handle this exception accordingly.

      @param inputEncoding

      The encoding of the otherPublicKey string.

      @param outputEncoding

      The encoding of the return value.
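
      Because an invalid remote key throws, a minimal defensive sketch (the helper name is hypothetical; the error-code check mirrors the description above) might look like:

      const { createECDH } = await import('node:crypto');
      
      const ecdh = createECDH('secp256k1');
      ecdh.generateKeys();
      
      function deriveSharedSecret(remotePublicKeyHex) {
        try {
          return ecdh.computeSecret(remotePublicKeyHex, 'hex', 'hex');
        } catch (err) {
          // Thrown when the supplied point does not lie on the curve.
          if (err.code === 'ERR_CRYPTO_ECDH_INVALID_PUBLIC_KEY') {
            return null; // Reject the handshake instead of crashing.
          }
          throw err;
        }
      }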

    • generateKeys(): Buffer;

      Generates private and public EC Diffie-Hellman key values, and returns the public key in the specified format and encoding. This key should be transferred to the other party.

      The format argument specifies point encoding and can be 'compressed' or 'uncompressed'. If format is not specified, the point will be returned in 'uncompressed' format.

      If encoding is provided a string is returned; otherwise a Buffer is returned.

      generateKeys(
      encoding: BinaryToTextEncoding,
      format?: ECDHKeyFormat
      ): string;

      Generates private and public EC Diffie-Hellman key values, and returns the public key in the specified format and encoding. This key should be transferred to the other party.

      The format argument specifies point encoding and can be 'compressed' or 'uncompressed'. If format is not specified, the point will be returned in 'uncompressed' format.

      If encoding is provided a string is returned; otherwise a Buffer is returned.

      @param encoding

      The encoding of the return value.

    • getPrivateKey(): Buffer;

      If encoding is specified, a string is returned; otherwise a Buffer is returned.

      @returns

      The EC Diffie-Hellman private key in the specified encoding.

      getPrivateKey(
      encoding: BinaryToTextEncoding
      ): string;

      If encoding is specified, a string is returned; otherwise a Buffer is returned.

      @param encoding

      The encoding of the return value.

      @returns

      The EC Diffie-Hellman private key in the specified encoding.

    • getPublicKey(
      encoding?: null,
      format?: ECDHKeyFormat
      ): Buffer;

      The format argument specifies point encoding and can be 'compressed' or 'uncompressed'. If format is not specified the point will be returned in 'uncompressed' format.

      If encoding is specified, a string is returned; otherwise a Buffer is returned.

      @param encoding

      The encoding of the return value.

      @returns

      The EC Diffie-Hellman public key in the specified encoding and format.

      getPublicKey(
      encoding: BinaryToTextEncoding,
      format?: ECDHKeyFormat
      ): string;

      The format argument specifies point encoding and can be 'compressed' or 'uncompressed'. If format is not specified the point will be returned in 'uncompressed' format.

      If encoding is specified, a string is returned; otherwise a Buffer is returned.

      @param encoding

      The encoding of the return value.

      @returns

      The EC Diffie-Hellman public key in the specified encoding and format.

    • setPrivateKey(
      privateKey: ArrayBufferView
      ): void;

      Sets the EC Diffie-Hellman private key. If encoding is provided, privateKey is expected to be a string; otherwise privateKey is expected to be a Buffer, TypedArray, or DataView.

      If privateKey is not valid for the curve specified when the ECDH object was created, an error is thrown. Upon setting the private key, the associated public point (key) is also generated and set in the ECDH object.

      setPrivateKey(
      privateKey: string,
      encoding: BinaryToTextEncoding
      ): void;

      Sets the EC Diffie-Hellman private key. If encoding is provided, privateKey is expected to be a string; otherwise privateKey is expected to be a Buffer, TypedArray, or DataView.

      If privateKey is not valid for the curve specified when the ECDH object was created, an error is thrown. Upon setting the private key, the associated public point (key) is also generated and set in the ECDH object.

      @param encoding

      The encoding of the privateKey string.

    • static convertKey(
      key: BinaryLike,
      curve: string,
      inputEncoding?: BinaryToTextEncoding,
      outputEncoding?: 'latin1' | 'base64' | 'base64url' | 'hex',
      format?: 'uncompressed' | 'compressed' | 'hybrid'
      ): string | Buffer<ArrayBufferLike>;

      Converts the EC Diffie-Hellman public key specified by key and curve to the format specified by format. The format argument specifies point encoding and can be 'compressed', 'uncompressed' or 'hybrid'. The supplied key is interpreted using the specified inputEncoding, and the returned key is encoded using the specified outputEncoding.

      Use getCurves to obtain a list of available curve names. On recent OpenSSL releases, openssl ecparam -list_curves will also display the name and description of each available elliptic curve.

      If format is not specified the point will be returned in 'uncompressed' format.

      If the inputEncoding is not provided, key is expected to be a Buffer, TypedArray, or DataView.

      Example (uncompressing a key):

      const {
        createECDH,
        ECDH,
      } = await import('node:crypto');
      
      const ecdh = createECDH('secp256k1');
      ecdh.generateKeys();
      
      const compressedKey = ecdh.getPublicKey('hex', 'compressed');
      
      const uncompressedKey = ECDH.convertKey(compressedKey,
                                              'secp256k1',
                                              'hex',
                                              'hex',
                                              'uncompressed');
      
      // The converted key and the uncompressed public key should be the same
      console.log(uncompressedKey === ecdh.getPublicKey('hex'));
      
      @param inputEncoding

      The encoding of the key string.

      @param outputEncoding

      The encoding of the return value.

  • class Hash

    The Hash class is a utility for creating hash digests of data. It can be used in one of two ways:

    • As a stream that is both readable and writable, where data is written to produce a computed hash digest on the readable side, or
    • Using the hash.update() and hash.digest() methods to produce the computed hash.

    The createHash method is used to create Hash instances. Hash objects are not to be created directly using the new keyword.

    Example: Using Hash objects as streams:

    const {
      createHash,
    } = await import('node:crypto');
    
    const hash = createHash('sha256');
    
    hash.on('readable', () => {
      // Only one element is going to be produced by the
      // hash stream.
      const data = hash.read();
      if (data) {
        console.log(data.toString('hex'));
        // Prints:
        //   6a2da20943931e9834fc12cfe5bb47bbd9ae43489a30726962b576f4e3993e50
      }
    });
    
    hash.write('some data to hash');
    hash.end();
    

    Example: Using Hash and piped streams:

    import { createReadStream } from 'node:fs';
    import { stdout } from 'node:process';
    const { createHash } = await import('node:crypto');
    
    const hash = createHash('sha256');
    
    const input = createReadStream('test.js');
    input.pipe(hash).setEncoding('hex').pipe(stdout);
    

    Example: Using the hash.update() and hash.digest() methods:

    const {
      createHash,
    } = await import('node:crypto');
    
    const hash = createHash('sha256');
    
    hash.update('some data to hash');
    console.log(hash.digest('hex'));
    // Prints:
    //   6a2da20943931e9834fc12cfe5bb47bbd9ae43489a30726962b576f4e3993e50
    
    • allowHalfOpen: boolean

      If false then the stream will automatically end the writable side when the readable side ends. Set initially by the allowHalfOpen constructor option, which defaults to true.

      This can be changed manually to change the half-open behavior of an existing Duplex stream instance, but must be changed before the 'end' event is emitted.

    • readonly closed: boolean

      Is true after 'close' has been emitted.

    • destroyed: boolean

      Is true after readable.destroy() has been called.

    • readonly errored: null | Error

      Returns error if the stream has been destroyed with an error.

    • readable: boolean

      Is true if it is safe to call read, which means the stream has not been destroyed or emitted 'error' or 'end'.

    • readonly readableAborted: boolean

      Returns whether the stream was destroyed or errored before emitting 'end'.

    • readonly readableDidRead: boolean

      Returns whether 'data' has been emitted.

    • readonly readableEncoding: null | BufferEncoding

      Getter for the property encoding of a given Readable stream. The encoding property can be set using the setEncoding method.

    • readonly readableEnded: boolean

      Becomes true when 'end' event is emitted.

    • readonly readableFlowing: null | boolean

      This property reflects the current state of a Readable stream as described in the Three states section.

    • readonly readableHighWaterMark: number

      Returns the value of highWaterMark passed when creating this Readable.

    • readonly readableLength: number

      This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the highWaterMark.

    • readonly readableObjectMode: boolean

      Getter for the property objectMode of a given Readable stream.

    • readonly writable: boolean

      Is true if it is safe to call writable.write(), which means the stream has not been destroyed, errored, or ended.

    • readonly writableCorked: number

      Number of times writable.uncork() needs to be called in order to fully uncork the stream.

    • readonly writableEnded: boolean

      Is true after writable.end() has been called. This property does not indicate whether the data has been flushed, for this use writable.writableFinished instead.

    • readonly writableFinished: boolean

      Is set to true immediately before the 'finish' event is emitted.

    • readonly writableHighWaterMark: number

      Return the value of highWaterMark passed when creating this Writable.

    • readonly writableLength: number

      This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the highWaterMark.

    • readonly writableNeedDrain: boolean

      Is true if the stream's buffer has been full and stream will emit 'drain'.

    • readonly writableObjectMode: boolean

      Getter for the property objectMode of a given Writable stream.

    • static captureRejections: boolean

      Value: boolean

      Change the default captureRejections option on all new EventEmitter objects.

    • readonly static captureRejectionSymbol: typeof captureRejectionSymbol

      Value: Symbol.for('nodejs.rejection')

      See how to write a custom rejection handler.

    • static defaultMaxListeners: number

      By default, a maximum of 10 listeners can be registered for any single event. This limit can be changed for individual EventEmitter instances using the emitter.setMaxListeners(n) method. To change the default for all EventEmitter instances, the events.defaultMaxListeners property can be used. If this value is not a positive number, a RangeError is thrown.

      Take caution when setting the events.defaultMaxListeners because the change affects all EventEmitter instances, including those created before the change is made. However, calling emitter.setMaxListeners(n) still has precedence over events.defaultMaxListeners.

      This is not a hard limit. The EventEmitter instance will allow more listeners to be added but will output a trace warning to stderr indicating that a "possible EventEmitter memory leak" has been detected. For any single EventEmitter, the emitter.getMaxListeners() and emitter.setMaxListeners() methods can be used to temporarily avoid this warning:

      import { EventEmitter } from 'node:events';
      const emitter = new EventEmitter();
      emitter.setMaxListeners(emitter.getMaxListeners() + 1);
      emitter.once('event', () => {
        // do stuff
        emitter.setMaxListeners(Math.max(emitter.getMaxListeners() - 1, 0));
      });
      

      The --trace-warnings command-line flag can be used to display the stack trace for such warnings.

      The emitted warning can be inspected with process.on('warning') and will have the additional emitter, type, and count properties, referring to the event emitter instance, the event's name and the number of attached listeners, respectively. Its name property is set to 'MaxListenersExceededWarning'.

    • readonly static errorMonitor: typeof errorMonitor

      This symbol shall be used to install a listener for only monitoring 'error' events. Listeners installed using this symbol are called before the regular 'error' listeners are called.

      Installing a listener using this symbol does not change the behavior once an 'error' event is emitted. Therefore, the process will still crash if no regular 'error' listener is installed.

    • _construct(
      callback: (error?: null | Error) => void
      ): void;
    • _destroy(
      error: null | Error,
      callback: (error?: null | Error) => void
      ): void;
    • _final(
      callback: (error?: null | Error) => void
      ): void;
    • _flush(
      callback: TransformCallback
      ): void;
    • _read(
      size: number
      ): void;
    • _transform(
      chunk: any,
      encoding: BufferEncoding,
      callback: TransformCallback
      ): void;
    • _write(
      chunk: any,
      encoding: BufferEncoding,
      callback: (error?: null | Error) => void
      ): void;
    • _writev(
      chunks: { chunk: any; encoding: BufferEncoding }[],
      callback: (error?: null | Error) => void
      ): void;
    • [Symbol.asyncDispose](): Promise<void>;

      Calls readable.destroy() with an AbortError and returns a promise that fulfills when the stream is finished.

    • [Symbol.asyncIterator](): AsyncIterator<any>;
    • [captureRejectionSymbol](
      error: Error,
      event: string | symbol,
      ...args: AnyRest
      ): void;
    • addListener(
      event: 'close',
      listener: () => void
      ): this;

      Event emitter. The defined events include:

      1. close
      2. data
      3. drain
      4. end
      5. error
      6. finish
      7. pause
      8. pipe
      9. readable
      10. resume
      11. unpipe

      event: 'data',
      listener: (chunk: any) => void
      ): this;
      event: 'drain',
      listener: () => void
      ): this;
      event: 'end',
      listener: () => void
      ): this;
      event: 'error',
      listener: (err: Error) => void
      ): this;
      event: 'finish',
      listener: () => void
      ): this;
      event: 'pause',
      listener: () => void
      ): this;
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;
      event: 'readable',
      listener: () => void
      ): this;
      event: 'resume',
      listener: () => void
      ): this;
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • asIndexedPairs(
      options?: Pick<ArrayOptions, 'signal'>
      ): Readable;

      This method returns a new stream with chunks of the underlying stream paired with a counter in the form [index, chunk]. The first index value is 0 and it increases by 1 for each chunk produced.

      @returns

      a stream of indexed pairs.

    • compose<T extends ReadableStream>(
      stream: T | ComposeFnParam | Iterable<T, any, any> | AsyncIterable<T, any, any>,
      options?: { signal: AbortSignal }
      ): T;
    • copy(
      options?: HashOptions
      ): Hash;

      Creates a new Hash object that contains a deep copy of the internal state of the current Hash object.

      The optional options argument controls stream behavior. For XOF hash functions such as 'shake256', the outputLength option can be used to specify the desired output length in bytes.

      An error is thrown when an attempt is made to copy the Hash object after its hash.digest() method has been called.

      // Calculate a rolling hash.
      const {
        createHash,
      } = await import('node:crypto');
      
      const hash = createHash('sha256');
      
      hash.update('one');
      console.log(hash.copy().digest('hex'));
      
      hash.update('two');
      console.log(hash.copy().digest('hex'));
      
      hash.update('three');
      console.log(hash.copy().digest('hex'));
      
      // Etc.
      
      @param options

      stream.transform options

    • cork(): void;

      The writable.cork() method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.

      The primary intent of writable.cork() is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them all to writable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork() without implementing writable._writev() may have an adverse effect on throughput.

      See also: writable.uncork(), writable._writev().

    • destroy(
      error?: Error
      ): this;

      Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the readable stream will release any internal resources and subsequent calls to push() will be ignored.

      Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.

      Implementors should not override this method, but instead implement readable._destroy().

      @param error

      Error which will be passed as payload in 'error' event

    • digest(): Buffer;

      Calculates the digest of all of the data passed to be hashed (using the hash.update() method). If encoding is provided a string will be returned; otherwise a Buffer is returned.

      The Hash object can not be used again after the hash.digest() method has been called. Multiple calls will cause an error to be thrown.

      digest(
      encoding: BinaryToTextEncoding
      ): string;

      Calculates the digest of all of the data passed to be hashed (using the hash.update() method). If encoding is provided a string will be returned; otherwise a Buffer is returned.

      The Hash object can not be used again after the hash.digest() method has been called. Multiple calls will cause an error to be thrown.

      @param encoding

      The encoding of the return value.

    • drop(
      limit: number,
      options?: Pick<ArrayOptions, 'signal'>
      ): Readable;

      This method returns a new stream with the first limit chunks dropped from the start.

      @param limit

      the number of chunks to drop from the readable.

      @returns

      a stream with limit chunks dropped from the start.
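
      For example, a minimal sketch using Readable.from (part of the stream helper API in recent Node.js-compatible runtimes):

      import { Readable } from 'node:stream';
      
      // Drop the first two chunks, then collect the rest.
      const rest = await Readable.from([1, 2, 3, 4]).drop(2).toArray();
      console.log(rest); // [3, 4]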

    • emit(
      event: 'close'
      ): boolean;

      Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

      Returns true if the event had listeners, false otherwise.

      import { EventEmitter } from 'node:events';
      const myEmitter = new EventEmitter();
      
      // First listener
      myEmitter.on('event', function firstListener() {
        console.log('Helloooo! first listener');
      });
      // Second listener
      myEmitter.on('event', function secondListener(arg1, arg2) {
        console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
      });
      // Third listener
      myEmitter.on('event', function thirdListener(...args) {
        const parameters = args.join(', ');
        console.log(`event with parameters ${parameters} in third listener`);
      });
      
      console.log(myEmitter.listeners('event'));
      
      myEmitter.emit('event', 1, 2, 3, 4, 5);
      
      // Prints:
      // [
      //   [Function: firstListener],
      //   [Function: secondListener],
      //   [Function: thirdListener]
      // ]
      // Helloooo! first listener
      // event with parameters 1, 2 in second listener
      // event with parameters 1, 2, 3, 4, 5 in third listener
      
      event: 'data',
      chunk: any
      ): boolean;
      event: 'drain'
      ): boolean;
      event: 'end'
      ): boolean;
      event: 'error',
      err: Error
      ): boolean;
      event: 'finish'
      ): boolean;
      event: 'pause'
      ): boolean;
      event: 'pipe',
      src: Readable
      ): boolean;
      event: 'readable'
      ): boolean;
      event: 'resume'
      ): boolean;
      event: 'unpipe',
      src: Readable
      ): boolean;
      event: string | symbol,
      ...args: any[]
      ): boolean;
    • end(
      cb?: () => void
      ): this;

      Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

      Calling the write method after calling end will raise an error.

      // Write 'hello, ' and then end with 'world!'.
      import fs from 'node:fs';
      const file = fs.createWriteStream('example.txt');
      file.write('hello, ');
      file.end('world!');
      // Writing more now is not allowed!
      
      chunk: any,
      cb?: () => void
      ): this;

      Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

      Calling the write method after calling end will raise an error.

      // Write 'hello, ' and then end with 'world!'.
      import fs from 'node:fs';
      const file = fs.createWriteStream('example.txt');
      file.write('hello, ');
      file.end('world!');
      // Writing more now is not allowed!
      
      @param chunk

      Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

      chunk: any,
      encoding: BufferEncoding,
      cb?: () => void
      ): this;

      Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

      Calling the write method after calling end will raise an error.

      // Write 'hello, ' and then end with 'world!'.
      import fs from 'node:fs';
      const file = fs.createWriteStream('example.txt');
      file.write('hello, ');
      file.end('world!');
      // Writing more now is not allowed!
      
      @param chunk

      Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

      @param encoding

      The encoding if chunk is a string

    • eventNames(): (string | symbol)[];

      Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or Symbols.

      import { EventEmitter } from 'node:events';
      
      const myEE = new EventEmitter();
      myEE.on('foo', () => {});
      myEE.on('bar', () => {});
      
      const sym = Symbol('symbol');
      myEE.on(sym, () => {});
      
      console.log(myEE.eventNames());
      // Prints: [ 'foo', 'bar', Symbol(symbol) ]
      
    • every(
      fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
      options?: ArrayOptions
      ): Promise<boolean>;

      This method is similar to Array.prototype.every and calls fn on each chunk in the stream to check whether the awaited return value is truthy for every chunk. Once an fn call's awaited return value is falsy, the stream is destroyed and the promise is fulfilled with false. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true.

      @param fn

      a function to call on each chunk of the stream. Async or not.

      @returns

      a promise evaluating to true if fn returned a truthy value for every one of the chunks.
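
      A short sketch, assuming the stream helper API is available:

      import { Readable } from 'node:stream';
      
      // true only if every chunk satisfies the predicate.
      const allEven = await Readable.from([2, 4, 6]).every((n) => n % 2 === 0);
      console.log(allEven); // true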

    • filter(
      fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
      options?: ArrayOptions
      ): Readable;

      This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be awaited.

      @param fn

      a function to filter chunks from the stream. Async or not.

      @returns

      a stream filtered with the predicate fn.
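
      For instance, a minimal sketch assuming the stream helper API is available:

      import { Readable } from 'node:stream';
      
      // Keep only the chunks greater than 2.
      const big = await Readable.from([1, 2, 3, 4]).filter((n) => n > 2).toArray();
      console.log(big); // [3, 4]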

    • find<T>(
      fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => data is T,
      options?: ArrayOptions
      ): Promise<undefined | T>;

      This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.

      @param fn

      a function to call on each chunk of the stream. Async or not.

      @returns

      a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.

      fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
      options?: ArrayOptions
      ): Promise<any>;

      This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.

      @param fn

      a function to call on each chunk of the stream. Async or not.

      @returns

      a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.

    • flatMap(
      fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => any,
      options?: ArrayOptions
      ): Readable;

      This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.

      It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.

      @param fn

      a function to map over every chunk in the stream. May be async. May be a stream or generator.

      @returns

      a stream flat-mapped with the function fn.
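
      A minimal sketch, assuming the stream helper API is available:

      import { Readable } from 'node:stream';
      
      // Each chunk maps to an iterable; the results are flattened.
      const words = await Readable.from(['a b', 'c']).flatMap((s) => s.split(' ')).toArray();
      console.log(words); // ['a', 'b', 'c']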

    • forEach(
      fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => void | Promise<void>,
      options?: ArrayOptions
      ): Promise<void>;

      This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be awaited.

      This method is different from for await...of loops in that it can optionally process chunks concurrently. In addition, a forEach iteration can only be stopped by having passed a signal option and aborting the related AbortController while for await...of can be stopped with break or return. In either case the stream will be destroyed.

      This method is different from listening to the 'data' event in that it uses the readable event in the underlying machinery and can limit the number of concurrent fn calls.

      @param fn

      a function to call on each chunk of the stream. Async or not.

      @returns

      a promise for when the stream has finished.
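
      A minimal sketch, assuming the stream helper API is available (the concurrency option is part of ArrayOptions):

      import { Readable } from 'node:stream';
      
      // Process up to two chunks at a time.
      await Readable.from([1, 2, 3, 4]).forEach(async (n) => {
        console.log(n * 10);
      }, { concurrency: 2 });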

    • getMaxListeners(): number;

      Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to EventEmitter.defaultMaxListeners.

    • isPaused(): boolean;

      The readable.isPaused() method returns the current operating state of the Readable. This is used primarily by the mechanism that underlies the readable.pipe() method. In most typical cases, there will be no reason to use this method directly.

      const readable = new stream.Readable();
      
      readable.isPaused(); // === false
      readable.pause();
      readable.isPaused(); // === true
      readable.resume();
      readable.isPaused(); // === false
      
    • iterator(
      options?: { destroyOnReturn: boolean }
      ): AsyncIterator<any>;

      The iterator created by this method gives users the option to cancel the destruction of the stream if the for await...of loop is exited by return, break, or throw, or if the iterator should destroy the stream if the stream emitted an error during iteration.
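
      A small sketch of the destroyOnReturn option, assuming the stream helper API is available:

      import { Readable } from 'node:stream';
      
      const stream = Readable.from([1, 2, 3]);
      
      // With destroyOnReturn set to false, breaking out of the loop
      // does not destroy the underlying stream.
      for await (const chunk of stream.iterator({ destroyOnReturn: false })) {
        if (chunk === 2) break;
      }
      
      console.log(stream.destroyed); // false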

    • listenerCount(
      eventName: string | symbol,
      listener?: Function
      ): number;

      Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.

      @param eventName

      The name of the event being listened for

      @param listener

      The event handler function

    • listeners(
      eventName: string | symbol
      ): Function[];

      Returns a copy of the array of listeners for the event named eventName.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      console.log(util.inspect(server.listeners('connection')));
      // Prints: [ [Function] ]
      
    • map(
      fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => any,
      options?: ArrayOptions
      ): Readable;

      This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be awaited before being passed to the result stream.

      @param fn

      a function to map over every chunk in the stream. Async or not.

      @returns

      a stream mapped with the function fn.
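
      For example, a minimal sketch assuming the stream helper API is available:

      import { Readable } from 'node:stream';
      
      // Double every chunk; async mappers are awaited before emission.
      const doubled = await Readable.from([1, 2, 3]).map((n) => n * 2).toArray();
      console.log(doubled); // [2, 4, 6]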

    • off<K>(
      eventName: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Alias for emitter.removeListener().

    • on(
      event: 'close',
      listener: () => void
      ): this;

      Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.on('foo', () => console.log('a'));
      myEE.prependListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: 'data',
      listener: (chunk: any) => void
      ): this;
      event: 'drain',
      listener: () => void
      ): this;
      event: 'end',
      listener: () => void
      ): this;
      event: 'error',
      listener: (err: Error) => void
      ): this;
      event: 'finish',
      listener: () => void
      ): this;
      event: 'pause',
      listener: () => void
      ): this;
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;
      event: 'readable',
      listener: () => void
      ): this;
      event: 'resume',
      listener: () => void
      ): this;
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • once(
      event: 'close',
      listener: () => void
      ): this;

      Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

      server.once('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.once('foo', () => console.log('a'));
      myEE.prependOnceListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: 'data',
      listener: (chunk: any) => void
      ): this;
      event: 'drain',
      listener: () => void
      ): this;
      event: 'end',
      listener: () => void
      ): this;
      event: 'error',
      listener: (err: Error) => void
      ): this;
      event: 'finish',
      listener: () => void
      ): this;
      event: 'pause',
      listener: () => void
      ): this;
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;
      event: 'readable',
      listener: () => void
      ): this;
      event: 'resume',
      listener: () => void
      ): this;
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • pause(): this;

      The readable.pause() method will cause a stream in flowing mode to stop emitting 'data' events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.

      const readable = getReadableStreamSomehow();
      readable.on('data', (chunk) => {
        console.log(`Received ${chunk.length} bytes of data.`);
        readable.pause();
        console.log('There will be no additional data for 1 second.');
        setTimeout(() => {
          console.log('Now data will start flowing again.');
          readable.resume();
        }, 1000);
      });
      

      The readable.pause() method has no effect if there is a 'readable' event listener.

    • pipe<T extends WritableStream>(
      destination: T,
      options?: { end: boolean }
      ): T;
    • prependListener(
      event: 'close',
      listener: () => void
      ): this;

      Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.prependListener('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: 'data',
      listener: (chunk: any) => void
      ): this;
      event: 'drain',
      listener: () => void
      ): this;
      event: 'end',
      listener: () => void
      ): this;
      event: 'error',
      listener: (err: Error) => void
      ): this;
      event: 'finish',
      listener: () => void
      ): this;
      event: 'pause',
      listener: () => void
      ): this;
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;
      event: 'readable',
      listener: () => void
      ): this;
      event: 'resume',
      listener: () => void
      ): this;
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • prependOnceListener(
      event: 'close',
      listener: () => void
      ): this;

      Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

      server.prependOnceListener('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: 'data',
      listener: (chunk: any) => void
      ): this;
      event: 'drain',
      listener: () => void
      ): this;
      event: 'end',
      listener: () => void
      ): this;
      event: 'error',
      listener: (err: Error) => void
      ): this;
      event: 'finish',
      listener: () => void
      ): this;
      event: 'pause',
      listener: () => void
      ): this;
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;
      event: 'readable',
      listener: () => void
      ): this;
      event: 'resume',
      listener: () => void
      ): this;
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • push(
      chunk: any,
      encoding?: BufferEncoding
      ): boolean;
    • rawListeners(
      eventName: string | symbol
      ): Function[];

      Returns a copy of the array of listeners for the event named eventName, including any wrappers (such as those created by .once()).

      import { EventEmitter } from 'node:events';
      const emitter = new EventEmitter();
      emitter.once('log', () => console.log('log once'));
      
      // Returns a new Array with a function `onceWrapper` which has a property
      // `listener` which contains the original listener bound above
      const listeners = emitter.rawListeners('log');
      const logFnWrapper = listeners[0];
      
      // Logs "log once" to the console and does not unbind the `once` event
      logFnWrapper.listener();
      
      // Logs "log once" to the console and removes the listener
      logFnWrapper();
      
      emitter.on('log', () => console.log('log persistently'));
      // Will return a new Array with a single function bound by `.on()` above
      const newListeners = emitter.rawListeners('log');
      
      // Logs "log persistently" twice
      newListeners[0]();
      emitter.emit('log');
      
    • read(
      size?: number
      ): any;

      The readable.read() method reads data out of the internal buffer and returns it. If no data is available to be read, null is returned. By default, the data is returned as a Buffer object unless an encoding has been specified using the readable.setEncoding() method or the stream is operating in object mode.

      The optional size argument specifies a specific number of bytes to read. If size bytes are not available to be read, null will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.

      If the size argument is not specified, all of the data contained in the internal buffer will be returned.

      The size argument must be less than or equal to 1 GiB.

      The readable.read() method should only be called on Readable streams operating in paused mode. In flowing mode, readable.read() is called automatically until the internal buffer is fully drained.

      const readable = getReadableStreamSomehow();
      
      // 'readable' may be triggered multiple times as data is buffered in
      readable.on('readable', () => {
        let chunk;
        console.log('Stream is readable (new data received in buffer)');
        // Use a loop to make sure we read all currently available data
        while (null !== (chunk = readable.read())) {
          console.log(`Read ${chunk.length} bytes of data...`);
        }
      });
      
      // 'end' will be triggered once when there is no more data available
      readable.on('end', () => {
        console.log('Reached end of stream.');
      });
      

      Each call to readable.read() returns a chunk of data, or null. The chunks are not concatenated. A while loop is necessary to consume all data currently in the buffer. When reading a large file .read() may return null, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new 'readable' event will be emitted when there is more data in the buffer. Finally the 'end' event will be emitted when there is no more data to come.

      Therefore to read a file's whole contents from a readable, it is necessary to collect chunks across multiple 'readable' events:

      const chunks = [];
      
      readable.on('readable', () => {
        let chunk;
        while (null !== (chunk = readable.read())) {
          chunks.push(chunk);
        }
      });
      
      readable.on('end', () => {
        const content = chunks.join('');
      });
      

      A Readable stream in object mode will always return a single item from a call to readable.read(size), regardless of the value of the size argument.

      If the readable.read() method returns a chunk of data, a 'data' event will also be emitted.

      Calling read after the 'end' event has been emitted will return null. No runtime error will be raised.

      @param size

      Optional argument to specify how much data to read.

    • reduce<T = any>(
      fn: (previous: any, data: any, options?: Pick<ArrayOptions, 'signal'>) => T,
      initial?: undefined,
      options?: Pick<ArrayOptions, 'signal'>
      ): Promise<T>;

      This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.

      If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.

      The reducer function iterates the stream element-by-element which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the readable.map method.

      @param fn

      a reducer function to call over every chunk in the stream. Async or not.

      @param initial

      the initial value to use in the reduction.

      @returns

      a promise for the final value of the reduction.

      reduce<T = any>(
      fn: (previous: T, data: any, options?: Pick<ArrayOptions, 'signal'>) => T,
      initial: T,
      options?: Pick<ArrayOptions, 'signal'>
      ): Promise<T>;

      This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.

      If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.

      The reducer function iterates the stream element-by-element which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the readable.map method.

      @param fn

      a reducer function to call over every chunk in the stream. Async or not.

      @param initial

      the initial value to use in the reduction.

      @returns

      a promise for the final value of the reduction.
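
      A minimal sketch, assuming the stream helper API is available:

      import { Readable } from 'node:stream';
      
      // Sum the chunks, starting from the initial value 0.
      const sum = await Readable.from([1, 2, 3, 4]).reduce((acc, n) => acc + n, 0);
      console.log(sum); // 10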

    • removeAllListeners(
      eventName?: string | symbol
      ): this;

      Removes all listeners, or those of the specified eventName.

      It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).

      Returns a reference to the EventEmitter, so that calls can be chained.

    • removeListener(
      event: 'close',
      listener: () => void
      ): this;

      Removes the specified listener from the listener array for the event named eventName.

      const callback = (stream) => {
        console.log('someone connected!');
      };
      server.on('connection', callback);
      // ...
      server.removeListener('connection', callback);
      

      removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance.

      Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from emit() in progress. Subsequent events behave as expected.

      import { EventEmitter } from 'node:events';
      class MyEmitter extends EventEmitter {}
      const myEmitter = new MyEmitter();
      
      const callbackA = () => {
        console.log('A');
        myEmitter.removeListener('event', callbackB);
      };
      
      const callbackB = () => {
        console.log('B');
      };
      
      myEmitter.on('event', callbackA);
      
      myEmitter.on('event', callbackB);
      
      // callbackA removes listener callbackB but it will still be called.
      // Internal listener array at time of emit [callbackA, callbackB]
      myEmitter.emit('event');
      // Prints:
      //   A
      //   B
      
      // callbackB is now removed.
      // Internal listener array [callbackA]
      myEmitter.emit('event');
      // Prints:
      //   A
      

      Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.

      When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed:

      import { EventEmitter } from 'node:events';
      const ee = new EventEmitter();
      
      function pong() {
        console.log('pong');
      }
      
      ee.on('ping', pong);
      ee.once('ping', pong);
      ee.removeListener('ping', pong);
      
      ee.emit('ping');
      ee.emit('ping');
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      event: 'data',
      listener: (chunk: any) => void
      ): this;
      event: 'drain',
      listener: () => void
      ): this;
      event: 'end',
      listener: () => void
      ): this;
      event: 'error',
      listener: (err: Error) => void
      ): this;
      event: 'finish',
      listener: () => void
      ): this;
      event: 'pause',
      listener: () => void
      ): this;
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;
      event: 'readable',
      listener: () => void
      ): this;
      event: 'resume',
      listener: () => void
      ): this;
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • resume(): this;

      The readable.resume() method causes an explicitly paused Readable stream to resume emitting 'data' events, switching the stream into flowing mode.

      The readable.resume() method can be used to fully consume the data from a stream without actually processing any of that data:

      getReadableStreamSomehow()
        .resume()
        .on('end', () => {
          console.log('Reached the end, but did not read anything.');
        });
      

      The readable.resume() method has no effect if there is a 'readable' event listener.

    • setDefaultEncoding(
      encoding: BufferEncoding
      ): this;

      The writable.setDefaultEncoding() method sets the default encoding for a Writable stream.

      @param encoding

      The new default encoding

    • setEncoding(
      encoding: BufferEncoding
      ): this;

      The readable.setEncoding() method sets the character encoding for data read from the Readable stream.

      By default, no encoding is assigned and stream data will be returned as Buffer objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than as Buffer objects. For instance, calling readable.setEncoding('utf8') will cause the output data to be interpreted as UTF-8 data, and passed as strings. Calling readable.setEncoding('hex') will cause the data to be encoded in hexadecimal string format.

      The Readable stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream as Buffer objects.

      const readable = getReadableStreamSomehow();
      readable.setEncoding('utf8');
      readable.on('data', (chunk) => {
        assert.equal(typeof chunk, 'string');
        console.log('Got %d characters of string data:', chunk.length);
      });
      
      @param encoding

      The encoding to use.

    • setMaxListeners(
      n: number
      ): this;

      By default EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps find memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.

      Returns a reference to the EventEmitter, so that calls can be chained.

    • some(
      fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
      options?: ArrayOptions
      ): Promise<boolean>;

      This method is similar to Array.prototype.some and calls fn on each chunk in the stream until an awaited return value is truthy. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false.

      @param fn

      a function to call on each chunk of the stream. Async or not.

      @returns

      a promise evaluating to true if fn returned a truthy value for at least one of the chunks.
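
      A short sketch, assuming the stream helper API is available:

      import { Readable } from 'node:stream';
      
      // Resolves true as soon as one chunk matches; the stream is then destroyed.
      const hasEven = await Readable.from([1, 3, 4, 5]).some((n) => n % 2 === 0);
      console.log(hasEven); // true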

    • take(
      limit: number,
      options?: Pick<ArrayOptions, 'signal'>
      ): Readable;

      This method returns a new stream with the first limit chunks.

      @param limit

      the number of chunks to take from the readable.

      @returns

      a stream with limit chunks taken.
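
      For example (an illustrative sketch, not from the upstream docs):

      import { Readable } from 'node:stream';

      const firstTwo = await Readable.from([1, 2, 3, 4]).take(2).toArray();
      console.log(firstTwo); // [1, 2]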

    • options?: Pick<ArrayOptions, 'signal'>
      ): Promise<any[]>;

      This method allows easily obtaining the contents of a stream.

      As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.

      @returns

      a promise containing an array with the contents of the stream.
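
      For example (an illustrative sketch, not from the upstream docs):

      import { Readable } from 'node:stream';

      const chunks = await Readable.from(['a', 'b', 'c']).toArray();
      console.log(chunks); // [ 'a', 'b', 'c' ]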

    • uncork(): void;

      The writable.uncork() method flushes all data buffered since cork was called.

      When using writable.cork() and writable.uncork() to manage the buffering of writes to a stream, defer calls to writable.uncork() using process.nextTick(). Doing so allows batching of all writable.write() calls that occur within a given Node.js event loop phase.

      stream.cork();
      stream.write('some ');
      stream.write('data ');
      process.nextTick(() => stream.uncork());
      

      If the writable.cork() method is called multiple times on a stream, the same number of calls to writable.uncork() must be called to flush the buffered data.

      stream.cork();
      stream.write('some ');
      stream.cork();
      stream.write('data ');
      process.nextTick(() => {
        stream.uncork();
        // The data will not be flushed until uncork() is called a second time.
        stream.uncork();
      });
      

      See also: writable.cork().

    • destination?: WritableStream
      ): this;

      The readable.unpipe() method detaches a Writable stream previously attached using the pipe method.

      If the destination is not specified, then all pipes are detached.

      If the destination is specified, but no pipe is set up for it, then the method does nothing.

      import fs from 'node:fs';
      const readable = getReadableStreamSomehow();
      const writable = fs.createWriteStream('file.txt');
      // All the data from readable goes into 'file.txt',
      // but only for the first second.
      readable.pipe(writable);
      setTimeout(() => {
        console.log('Stop writing to file.txt.');
        readable.unpipe(writable);
        console.log('Manually close the file stream.');
        writable.end();
      }, 1000);
      
      @param destination

      Optional specific stream to unpipe

    • chunk: any,
      encoding?: BufferEncoding
      ): void;

      Passing chunk as null signals the end of the stream (EOF) and behaves the same as readable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.

      The readable.unshift() method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.

      The stream.unshift(chunk) method cannot be called after the 'end' event has been emitted or a runtime error will be thrown.

      Developers using stream.unshift() should often consider switching to a Transform stream instead. See the API for stream implementers section for more information.

      // Pull off a header delimited by \n\n.
      // Use unshift() if we get too much.
      // Call the callback with (error, header, stream).
      import { StringDecoder } from 'node:string_decoder';
      function parseHeader(stream, callback) {
        stream.on('error', callback);
        stream.on('readable', onReadable);
        const decoder = new StringDecoder('utf8');
        let header = '';
        function onReadable() {
          let chunk;
          while (null !== (chunk = stream.read())) {
            const str = decoder.write(chunk);
            if (str.includes('\n\n')) {
              // Found the header boundary.
              const split = str.split(/\n\n/);
              header += split.shift();
              const remaining = split.join('\n\n');
              const buf = Buffer.from(remaining, 'utf8');
              stream.removeListener('error', callback);
              // Remove the 'readable' listener before unshifting.
              stream.removeListener('readable', onReadable);
              if (buf.length)
                stream.unshift(buf);
              // Now the body of the message can be read from the stream.
              callback(null, header, stream);
              return;
            }
            // Still reading the header.
            header += str;
          }
        }
      }
      

      Unlike push, stream.unshift(chunk) will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results if readable.unshift() is called during a read (i.e. from within a _read implementation on a custom stream). Following the call to readable.unshift() with an immediate push will reset the reading state appropriately, however it is best to simply avoid calling readable.unshift() while in the process of performing a read.

      @param chunk

      Chunk of data to unshift onto the read queue. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray}, {DataView} or null. For object mode streams, chunk may be any JavaScript value.

      @param encoding

      Encoding of string chunks. Must be a valid Buffer encoding, such as 'utf8' or 'ascii'.

    • data: BinaryLike
      ): Hash;

      Updates the hash content with the given data, the encoding of which is given in inputEncoding. If encoding is not provided, and the data is a string, an encoding of 'utf8' is enforced. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.

      This can be called many times with new data as it is streamed.

      data: string,
      inputEncoding: Encoding
      ): Hash;

      Updates the hash content with the given data, the encoding of which is given in inputEncoding. If encoding is not provided, and the data is a string, an encoding of 'utf8' is enforced. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.

      This can be called many times with new data as it is streamed.

      @param inputEncoding

      The encoding of the data string.
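
      A short sketch (not from the upstream docs) combining both overloads; the hex string below decodes to the bytes of 'to hash':

      const { createHash } = await import('node:crypto');

      const hash = createHash('sha256');
      hash.update('some data ');            // strings default to utf8
      hash.update('746f2068617368', 'hex'); // same bytes as 'to hash'
      console.log(hash.digest('hex'));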

    • stream: ReadableStream
      ): this;

      Prior to Node.js 0.10, streams did not implement the entire node:stream module API as it is currently defined. (See Compatibility for more information.)

      When using an older Node.js library that emits 'data' events and has a pause method that is advisory only, the readable.wrap() method can be used to create a Readable stream that uses the old stream as its data source.

      It will rarely be necessary to use readable.wrap() but the method has been provided as a convenience for interacting with older Node.js applications and libraries.

      import { OldReader } from './old-api-module.js';
      import { Readable } from 'node:stream';
      const oreader = new OldReader();
      const myReader = new Readable().wrap(oreader);
      
      myReader.on('readable', () => {
        myReader.read(); // etc.
      });
      
      @param stream

      An "old style" readable stream

    • chunk: any,
      callback?: (error: undefined | null | Error) => void
      ): boolean;

      The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

      The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

      While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

      Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

      If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

      function write(data, cb) {
        if (!stream.write(data)) {
          stream.once('drain', cb);
        } else {
          process.nextTick(cb);
        }
      }
      
      // Wait for cb to be called before doing any other write.
      write('hello', () => {
        console.log('Write completed, do more writes now.');
      });
      

      A Writable stream in object mode will always ignore the encoding argument.

      @param chunk

      Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

      @param callback

      Callback for when this chunk of data is flushed.

      @returns

      false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.

      chunk: any,
      encoding: BufferEncoding,
      callback?: (error: undefined | null | Error) => void
      ): boolean;

      The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

      The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

      While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

      Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

      If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

      function write(data, cb) {
        if (!stream.write(data)) {
          stream.once('drain', cb);
        } else {
          process.nextTick(cb);
        }
      }
      
      // Wait for cb to be called before doing any other write.
      write('hello', () => {
        console.log('Write completed, do more writes now.');
      });
      

      A Writable stream in object mode will always ignore the encoding argument.

      @param chunk

      Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

      @param encoding

      The encoding, if chunk is a string.

      @param callback

      Callback for when this chunk of data is flushed.

      @returns

      false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.

    • signal: AbortSignal,
      resource: (event: Event) => void
      ): Disposable;

      Listens once to the abort event on the provided signal.

      Listening to the abort event on abort signals is unsafe and may lead to resource leaks since another third party with the signal can call e.stopImmediatePropagation(). Unfortunately Node.js cannot change this since it would violate the web standard. Additionally, the original API makes it easy to forget to remove listeners.

      This API allows safely using AbortSignals in Node.js APIs by solving these two issues by listening to the event such that stopImmediatePropagation does not prevent the listener from running.

      Returns a disposable so that it may be unsubscribed from more easily.

      import { addAbortListener } from 'node:events';
      
      function example(signal) {
        let disposable;
        try {
          signal.addEventListener('abort', (e) => e.stopImmediatePropagation());
          disposable = addAbortListener(signal, (e) => {
            // Do something when signal is aborted.
          });
        } finally {
          disposable?.[Symbol.dispose]();
        }
      }
      
      @returns

      Disposable that removes the abort listener.

    • static from(
      src: string | Object | Stream | ArrayBuffer | Blob | Iterable<any, any, any> | AsyncIterable<any, any, any> | AsyncGeneratorFunction | Promise<any>
      ): Duplex;

      A utility method for creating duplex streams.

      • Stream converts writable stream into writable Duplex and readable stream to Duplex.
      • Blob converts into readable Duplex.
      • string converts into readable Duplex.
      • ArrayBuffer converts into readable Duplex.
      • AsyncIterable converts into a readable Duplex. Cannot yield null.
      • AsyncGeneratorFunction converts into a readable/writable transform Duplex. Must take a source AsyncIterable as first parameter. Cannot yield null.
      • AsyncFunction converts into a writable Duplex. Must return either null or undefined.
      • Object ({ writable, readable }) converts readable and writable into Stream and then combines them into Duplex where the Duplex will write to the writable and read from the readable.
      • Promise converts into readable Duplex. Value null is ignored.
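
      A minimal sketch (not from the upstream docs) of the AsyncGeneratorFunction case, which yields a transform-style Duplex:

      import { Duplex } from 'node:stream';

      const upper = Duplex.from(async function* (source) {
        for await (const chunk of source) {
          yield String(chunk).toUpperCase();
        }
      });

      upper.on('data', (chunk) => console.log(String(chunk))); // HELLO
      upper.end('hello');
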
    • static fromWeb(
      duplexStream: { readable: ReadableStream; writable: WritableStream },
      options?: Pick<DuplexOptions<Duplex>, 'signal' | 'allowHalfOpen' | 'decodeStrings' | 'encoding' | 'highWaterMark' | 'objectMode'>
      ): Duplex;

      A utility method for creating a Duplex from a web ReadableStream and WritableStream.
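
      For example (an illustrative sketch assuming the global TransformStream available in Node.js 18+ and Bun), the two ends of a web TransformStream can be wrapped into a single Node.js Duplex:

      import { Duplex } from 'node:stream';

      const { readable, writable } = new TransformStream();
      const duplex = Duplex.fromWeb({ readable, writable }, { encoding: 'utf8' });

      duplex.on('data', (chunk) => console.log(chunk)); // 'hello'
      duplex.write('hello');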

    • emitter: EventEmitter<DefaultEventMap> | EventTarget,
      name: string | symbol
      ): Function[];

      Returns a copy of the array of listeners for the event named eventName.

      For EventEmitters this behaves exactly the same as calling .listeners on the emitter.

      For EventTargets this is the only way to get the event listeners for the event target. This is useful for debugging and diagnostic purposes.

      import { getEventListeners, EventEmitter } from 'node:events';
      
      {
        const ee = new EventEmitter();
        const listener = () => console.log('Events are fun');
        ee.on('foo', listener);
        console.log(getEventListeners(ee, 'foo')); // [ [Function: listener] ]
      }
      {
        const et = new EventTarget();
        const listener = () => console.log('Events are fun');
        et.addEventListener('foo', listener);
        console.log(getEventListeners(et, 'foo')); // [ [Function: listener] ]
      }
      
    • emitter: EventEmitter<DefaultEventMap> | EventTarget
      ): number;

      Returns the currently set max amount of listeners.

      For EventEmitters this behaves exactly the same as calling .getMaxListeners on the emitter.

      For EventTargets this is the only way to get the max event listeners for the event target. If the number of event handlers on a single EventTarget exceeds the max set, the EventTarget will print a warning.

      import { getMaxListeners, setMaxListeners, EventEmitter } from 'node:events';
      
      {
        const ee = new EventEmitter();
        console.log(getMaxListeners(ee)); // 10
        setMaxListeners(11, ee);
        console.log(getMaxListeners(ee)); // 11
      }
      {
        const et = new EventTarget();
        console.log(getMaxListeners(et)); // 10
        setMaxListeners(11, et);
        console.log(getMaxListeners(et)); // 11
      }
      
    • static on(
      emitter: EventEmitter,
      eventName: string | symbol,
      options?: StaticEventEmitterIteratorOptions
      ): AsyncIterator<any[]>;
      import { on, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ee = new EventEmitter();
      
      // Emit later on
      process.nextTick(() => {
        ee.emit('foo', 'bar');
        ee.emit('foo', 42);
      });
      
      for await (const event of on(ee, 'foo')) {
        // The execution of this inner block is synchronous and it
        // processes one event at a time (even with await). Do not use
        // if concurrent execution is required.
        console.log(event); // prints ['bar'] [42]
      }
      // Unreachable here
      

      Returns an AsyncIterator that iterates eventName events. It will throw if the EventEmitter emits 'error'. It removes all listeners when exiting the loop. The value returned by each iteration is an array composed of the emitted event arguments.

      An AbortSignal can be used to cancel waiting on events:

      import { on, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ac = new AbortController();
      
      (async () => {
        const ee = new EventEmitter();
      
        // Emit later on
        process.nextTick(() => {
          ee.emit('foo', 'bar');
          ee.emit('foo', 42);
        });
      
        for await (const event of on(ee, 'foo', { signal: ac.signal })) {
          // The execution of this inner block is synchronous and it
          // processes one event at a time (even with await). Do not use
          // if concurrent execution is required.
          console.log(event); // prints ['bar'] [42]
        }
        // Unreachable here
      })();
      
      process.nextTick(() => ac.abort());
      

      Use the close option to specify an array of event names that will end the iteration:

      import { on, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ee = new EventEmitter();
      
      // Emit later on
      process.nextTick(() => {
        ee.emit('foo', 'bar');
        ee.emit('foo', 42);
        ee.emit('close');
      });
      
      for await (const event of on(ee, 'foo', { close: ['close'] })) {
        console.log(event); // prints ['bar'] [42]
      }
      // the loop will exit after 'close' is emitted
      console.log('done'); // prints 'done'
      
      @returns

      An AsyncIterator that iterates eventName events emitted by the emitter

      static on(
      emitter: EventTarget,
      eventName: string,
      options?: StaticEventEmitterIteratorOptions
      ): AsyncIterator<any[]>;
      import { on, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ee = new EventEmitter();
      
      // Emit later on
      process.nextTick(() => {
        ee.emit('foo', 'bar');
        ee.emit('foo', 42);
      });
      
      for await (const event of on(ee, 'foo')) {
        // The execution of this inner block is synchronous and it
        // processes one event at a time (even with await). Do not use
        // if concurrent execution is required.
        console.log(event); // prints ['bar'] [42]
      }
      // Unreachable here
      

      Returns an AsyncIterator that iterates eventName events. It will throw if the EventEmitter emits 'error'. It removes all listeners when exiting the loop. The value returned by each iteration is an array composed of the emitted event arguments.

      An AbortSignal can be used to cancel waiting on events:

      import { on, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ac = new AbortController();
      
      (async () => {
        const ee = new EventEmitter();
      
        // Emit later on
        process.nextTick(() => {
          ee.emit('foo', 'bar');
          ee.emit('foo', 42);
        });
      
        for await (const event of on(ee, 'foo', { signal: ac.signal })) {
          // The execution of this inner block is synchronous and it
          // processes one event at a time (even with await). Do not use
          // if concurrent execution is required.
          console.log(event); // prints ['bar'] [42]
        }
        // Unreachable here
      })();
      
      process.nextTick(() => ac.abort());
      

      Use the close option to specify an array of event names that will end the iteration:

      import { on, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ee = new EventEmitter();
      
      // Emit later on
      process.nextTick(() => {
        ee.emit('foo', 'bar');
        ee.emit('foo', 42);
        ee.emit('close');
      });
      
      for await (const event of on(ee, 'foo', { close: ['close'] })) {
        console.log(event); // prints ['bar'] [42]
      }
      // the loop will exit after 'close' is emitted
      console.log('done'); // prints 'done'
      
      @returns

      An AsyncIterator that iterates eventName events emitted by the emitter

    • static once(
      emitter: EventEmitter,
      eventName: string | symbol,
      options?: StaticEventEmitterOptions
      ): Promise<any[]>;

      Creates a Promise that is fulfilled when the EventEmitter emits the given event or that is rejected if the EventEmitter emits 'error' while waiting. The Promise will resolve with an array of all the arguments emitted to the given event.

      This method is intentionally generic and works with the web platform EventTarget interface, which has no special 'error' event semantics and does not listen to the 'error' event.

      import { once, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ee = new EventEmitter();
      
      process.nextTick(() => {
        ee.emit('myevent', 42);
      });
      
      const [value] = await once(ee, 'myevent');
      console.log(value);
      
      const err = new Error('kaboom');
      process.nextTick(() => {
        ee.emit('error', err);
      });
      
      try {
        await once(ee, 'myevent');
      } catch (err) {
        console.error('error happened', err);
      }
      

      The special handling of the 'error' event is only used when events.once() is used to wait for another event. If events.once() is used to wait for the 'error' event itself, then it is treated as any other kind of event without special handling:

      import { EventEmitter, once } from 'node:events';
      
      const ee = new EventEmitter();
      
      once(ee, 'error')
        .then(([err]) => console.log('ok', err.message))
        .catch((err) => console.error('error', err.message));
      
      ee.emit('error', new Error('boom'));
      
      // Prints: ok boom
      

      An AbortSignal can be used to cancel waiting for the event:

      import { EventEmitter, once } from 'node:events';
      
      const ee = new EventEmitter();
      const ac = new AbortController();
      
      async function foo(emitter, event, signal) {
        try {
          await once(emitter, event, { signal });
          console.log('event emitted!');
        } catch (error) {
          if (error.name === 'AbortError') {
            console.error('Waiting for the event was canceled!');
          } else {
            console.error('There was an error', error.message);
          }
        }
      }
      
      foo(ee, 'foo', ac.signal);
      ac.abort(); // Abort waiting for the event
      ee.emit('foo'); // Prints: Waiting for the event was canceled!
      
      static once(
      emitter: EventTarget,
      eventName: string,
      options?: StaticEventEmitterOptions
      ): Promise<any[]>;

      Creates a Promise that is fulfilled when the EventEmitter emits the given event or that is rejected if the EventEmitter emits 'error' while waiting. The Promise will resolve with an array of all the arguments emitted to the given event.

      This method is intentionally generic and works with the web platform EventTarget interface, which has no special 'error' event semantics and does not listen to the 'error' event.

      import { once, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ee = new EventEmitter();
      
      process.nextTick(() => {
        ee.emit('myevent', 42);
      });
      
      const [value] = await once(ee, 'myevent');
      console.log(value);
      
      const err = new Error('kaboom');
      process.nextTick(() => {
        ee.emit('error', err);
      });
      
      try {
        await once(ee, 'myevent');
      } catch (err) {
        console.error('error happened', err);
      }
      

      The special handling of the 'error' event is only used when events.once() is used to wait for another event. If events.once() is used to wait for the 'error' event itself, then it is treated as any other kind of event without special handling:

      import { EventEmitter, once } from 'node:events';
      
      const ee = new EventEmitter();
      
      once(ee, 'error')
        .then(([err]) => console.log('ok', err.message))
        .catch((err) => console.error('error', err.message));
      
      ee.emit('error', new Error('boom'));
      
      // Prints: ok boom
      

      An AbortSignal can be used to cancel waiting for the event:

      import { EventEmitter, once } from 'node:events';
      
      const ee = new EventEmitter();
      const ac = new AbortController();
      
      async function foo(emitter, event, signal) {
        try {
          await once(emitter, event, { signal });
          console.log('event emitted!');
        } catch (error) {
          if (error.name === 'AbortError') {
            console.error('Waiting for the event was canceled!');
          } else {
            console.error('There was an error', error.message);
          }
        }
      }
      
      foo(ee, 'foo', ac.signal);
      ac.abort(); // Abort waiting for the event
      ee.emit('foo'); // Prints: Waiting for the event was canceled!
      
    • n?: number,
      ...eventTargets: (EventEmitter<DefaultEventMap> | EventTarget)[]
      ): void;
      import { setMaxListeners, EventEmitter } from 'node:events';
      
      const target = new EventTarget();
      const emitter = new EventEmitter();
      
      setMaxListeners(5, target, emitter);
      
      @param n

      A non-negative number. The maximum number of listeners per EventTarget event.

      @param eventTargets

      Zero or more {EventTarget} or {EventEmitter} instances. If none are specified, n is set as the default max for all newly created {EventTarget} and {EventEmitter} objects.

    • static toWeb(
      streamDuplex: Duplex
      ): { readable: ReadableStream; writable: WritableStream };

      A utility method for creating a web ReadableStream and WritableStream from a Duplex.
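
      A minimal sketch (not from the upstream docs) using an echoing Duplex:

      import { Duplex } from 'node:stream';

      const duplex = new Duplex({
        read() {},
        write(chunk, encoding, callback) {
          this.push(chunk); // echo writes back to the readable side
          callback();
        },
      });

      const { readable, writable } = Duplex.toWeb(duplex);

      const writer = writable.getWriter();
      await writer.write('hello');

      const reader = readable.getReader();
      const { value } = await reader.read();
      console.log(String(value)); // 'hello'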

  • class KeyObject

    Node.js uses a KeyObject class to represent a symmetric or asymmetric key, and each kind of key exposes different functions. The createSecretKey, createPublicKey and createPrivateKey methods are used to create KeyObject instances. KeyObject objects are not to be created directly using the new keyword.

    Most applications should consider using the new KeyObject API instead of passing keys as strings or Buffers due to improved security features.

    KeyObject instances can be passed to other threads via postMessage(). The receiver obtains a cloned KeyObject, and the KeyObject does not need to be listed in the transferList argument.
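
    For example (an illustrative sketch, not from the upstream docs), createSecretKey wraps raw key material in a KeyObject:

    const { createSecretKey } = await import('node:crypto');

    const key = createSecretKey(Buffer.from('00112233445566778899aabbccddeeff', 'hex'));
    console.log(key.type);             // 'secret'
    console.log(key.symmetricKeySize); // 16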

    • asymmetricKeyDetails?: AsymmetricKeyDetails

      This property exists only on asymmetric keys. Depending on the type of the key, this object contains information about the key. None of the information obtained through this property can be used to uniquely identify a key or to compromise the security of the key.

      For RSA-PSS keys, if the key material contains a RSASSA-PSS-params sequence, the hashAlgorithm, mgf1HashAlgorithm, and saltLength properties will be set.

      Other key details might be exposed via this API using additional attributes.

    • asymmetricKeyType?: KeyType

      For asymmetric keys, this property represents the type of the key. Supported key types are:

      • 'rsa' (OID 1.2.840.113549.1.1.1)
      • 'rsa-pss' (OID 1.2.840.113549.1.1.10)
      • 'dsa' (OID 1.2.840.10040.4.1)
      • 'ec' (OID 1.2.840.10045.2.1)
      • 'x25519' (OID 1.3.101.110)
      • 'x448' (OID 1.3.101.111)
      • 'ed25519' (OID 1.3.101.112)
      • 'ed448' (OID 1.3.101.113)
      • 'dh' (OID 1.2.840.113549.1.3.1)

      This property is undefined for unrecognized KeyObject types and symmetric keys.

    • symmetricKeySize?: number

      For secret keys, this property represents the size of the key in bytes. This property is undefined for asymmetric keys.

    • type: KeyObjectType

      Depending on the type of this KeyObject, this property is either 'secret' for secret (symmetric) keys, 'public' for public (asymmetric) keys or 'private' for private (asymmetric) keys.

    • otherKeyObject: KeyObject
      ): boolean;

      Returns true or false depending on whether the keys have exactly the same type, value, and parameters. This method is not constant time.

      @param otherKeyObject

      A KeyObject with which to compare keyObject.
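
      For example (an illustrative sketch, not from the upstream docs):

      const { createSecretKey } = await import('node:crypto');

      const a = createSecretKey(Buffer.from('secret'));
      const b = createSecretKey(Buffer.from('secret'));
      const c = createSecretKey(Buffer.from('other'));

      console.log(a.equals(b)); // true: same type, value, and parameters
      console.log(a.equals(c)); // false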

    • options: KeyExportOptions<'pem'>
      ): string | Buffer<ArrayBufferLike>;

      For symmetric keys, the following encoding options can be used:

      For public keys, the following encoding options can be used:

      For private keys, the following encoding options can be used:

      The result type depends on the selected encoding format, when PEM the result is a string, when DER it will be a buffer containing the data encoded as DER, when JWK it will be an object.

      When JWK encoding format was selected, all other encoding options are ignored.

      PKCS#1, SEC1, and PKCS#8 type keys can be encrypted by using a combination of the cipher and format options. The PKCS#8 type can be used with any format to encrypt any key algorithm (RSA, EC, or DH) by specifying a cipher. PKCS#1 and SEC1 can only be encrypted by specifying a cipher when the PEM format is used. For maximum compatibility, use PKCS#8 for encrypted private keys. Since PKCS#8 defines its own encryption mechanism, PEM-level encryption is not supported when encrypting a PKCS#8 key. See RFC 5208 for PKCS#8 encryption and RFC 1421 for PKCS#1 and SEC1 encryption.

      options?: KeyExportOptions<'der'>
      ): Buffer;

      For symmetric keys, the following encoding options can be used:

      For public keys, the following encoding options can be used:

      For private keys, the following encoding options can be used:

      The result type depends on the selected encoding format, when PEM the result is a string, when DER it will be a buffer containing the data encoded as DER, when JWK it will be an object.

      When JWK encoding format was selected, all other encoding options are ignored.

      PKCS#1, SEC1, and PKCS#8 type keys can be encrypted by using a combination of the cipher and format options. The PKCS#8 type can be used with any format to encrypt any key algorithm (RSA, EC, or DH) by specifying a cipher. PKCS#1 and SEC1 can only be encrypted by specifying a cipher when the PEM format is used. For maximum compatibility, use PKCS#8 for encrypted private keys. Since PKCS#8 defines its own encryption mechanism, PEM-level encryption is not supported when encrypting a PKCS#8 key. See RFC 5208 for PKCS#8 encryption and RFC 1421 for PKCS#1 and SEC1 encryption.

      options?: JwkKeyExportOptions
      ): JsonWebKey;

      For symmetric keys, the following encoding options can be used:

      For public keys, the following encoding options can be used:

      For private keys, the following encoding options can be used:

      The result type depends on the selected encoding format, when PEM the result is a string, when DER it will be a buffer containing the data encoded as DER, when JWK it will be an object.

      When JWK encoding format was selected, all other encoding options are ignored.

      PKCS#1, SEC1, and PKCS#8 type keys can be encrypted by using a combination of the cipher and format options. The PKCS#8 type can be used with any format to encrypt any key algorithm (RSA, EC, or DH) by specifying a cipher. PKCS#1 and SEC1 can only be encrypted by specifying a cipher when the PEM format is used. For maximum compatibility, use PKCS#8 for encrypted private keys. Since PKCS#8 defines its own encryption mechanism, PEM-level encryption is not supported when encrypting a PKCS#8 key. See RFC 5208 for PKCS#8 encryption and RFC 1421 for PKCS#1 and SEC1 encryption.
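
      A minimal sketch (not from the upstream docs) exporting an RSA key pair, with the private key encrypted as recommended above:

      const { generateKeyPairSync } = await import('node:crypto');

      const { publicKey, privateKey } = generateKeyPairSync('rsa', {
        modulusLength: 2048,
      });

      // PEM yields strings; DER yields Buffers; JWK yields plain objects.
      const spkiPem = publicKey.export({ type: 'spki', format: 'pem' });
      const pkcs8Pem = privateKey.export({
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',    // encrypt the private key material
        passphrase: 'top secret', // hypothetical passphrase
      });
      console.log(spkiPem.split('\n')[0]); // -----BEGIN PUBLIC KEY-----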

    • extractable: boolean,
      keyUsages: readonly KeyUsage[]
      ): CryptoKey;

      Converts a KeyObject instance to a CryptoKey.

    • static from(
      key: CryptoKey
      ): KeyObject;

      Example: Converting a CryptoKey instance to a KeyObject:

      const { KeyObject } = await import('node:crypto');
      const { subtle } = globalThis.crypto;
      
      const key = await subtle.generateKey({
        name: 'HMAC',
        hash: 'SHA-256',
        length: 256,
      }, true, ['sign', 'verify']);
      
      const keyObject = KeyObject.from(key);
      console.log(keyObject.symmetricKeySize);
      // Prints: 32 (symmetric key size in bytes)
      
  • class Sign

    The Sign class is a utility for generating signatures. It can be used in one of two ways:

    • As a writable stream, where data to be signed is written and the sign.sign() method is used to generate and return the signature, or
    • Using the sign.update() and sign.sign() methods to produce the signature.

    The createSign method is used to create Sign instances. The argument is the string name of the hash function to use. Sign objects are not to be created directly using the new keyword.

    Example: Using Sign and Verify objects as streams:

    const {
      generateKeyPairSync,
      createSign,
      createVerify,
    } = await import('node:crypto');
    
    const { privateKey, publicKey } = generateKeyPairSync('ec', {
      namedCurve: 'sect239k1',
    });
    
    const sign = createSign('SHA256');
    sign.write('some data to sign');
    sign.end();
    const signature = sign.sign(privateKey, 'hex');
    
    const verify = createVerify('SHA256');
    verify.write('some data to sign');
    verify.end();
    console.log(verify.verify(publicKey, signature, 'hex'));
    // Prints: true
    

    Example: Using the sign.update() and verify.update() methods:

    const {
      generateKeyPairSync,
      createSign,
      createVerify,
    } = await import('node:crypto');
    
    const { privateKey, publicKey } = generateKeyPairSync('rsa', {
      modulusLength: 2048,
    });
    
    const sign = createSign('SHA256');
    sign.update('some data to sign');
    sign.end();
    const signature = sign.sign(privateKey);
    
    const verify = createVerify('SHA256');
    verify.update('some data to sign');
    verify.end();
    console.log(verify.verify(publicKey, signature));
    // Prints: true
    
    • readonly closed: boolean

      Is true after 'close' has been emitted.

    • destroyed: boolean

      Is true after writable.destroy() has been called.

    • readonly errored: null | Error

      Returns error if the stream has been destroyed with an error.

    • readonly writable: boolean

      Is true if it is safe to call writable.write(), which means the stream has not been destroyed, errored, or ended.

    • readonly writableCorked: number

      Number of times writable.uncork() needs to be called in order to fully uncork the stream.

    • readonly writableEnded: boolean

      Is true after writable.end() has been called. This property does not indicate whether the data has been flushed; for this, use writable.writableFinished instead.

    • readonly writableFinished: boolean

      Is set to true immediately before the 'finish' event is emitted.

    • readonly writableHighWaterMark: number

      Return the value of highWaterMark passed when creating this Writable.

    • readonly writableLength: number

      This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the highWaterMark.

    • readonly writableNeedDrain: boolean

      Is true if the stream's buffer has been full and the stream will emit 'drain'.

    • readonly writableObjectMode: boolean

      Getter for the property objectMode of a given Writable stream.

    • static captureRejections: boolean

      Value: boolean

      Change the default captureRejections option on all new EventEmitter objects.

    • readonly static captureRejectionSymbol: typeof captureRejectionSymbol

      Value: Symbol.for('nodejs.rejection')

      See how to write a custom rejection handler.

    • static defaultMaxListeners: number

      By default, a maximum of 10 listeners can be registered for any single event. This limit can be changed for individual EventEmitter instances using the emitter.setMaxListeners(n) method. To change the default for all EventEmitter instances, the events.defaultMaxListeners property can be used. If this value is not a positive number, a RangeError is thrown.

      Take caution when setting the events.defaultMaxListeners because the change affects all EventEmitter instances, including those created before the change is made. However, calling emitter.setMaxListeners(n) still has precedence over events.defaultMaxListeners.

      This is not a hard limit. The EventEmitter instance will allow more listeners to be added but will output a trace warning to stderr indicating that a "possible EventEmitter memory leak" has been detected. For any single EventEmitter, the emitter.getMaxListeners() and emitter.setMaxListeners() methods can be used to temporarily avoid this warning:

      import { EventEmitter } from 'node:events';
      const emitter = new EventEmitter();
      emitter.setMaxListeners(emitter.getMaxListeners() + 1);
      emitter.once('event', () => {
        // do stuff
        emitter.setMaxListeners(Math.max(emitter.getMaxListeners() - 1, 0));
      });
      

      The --trace-warnings command-line flag can be used to display the stack trace for such warnings.

      The emitted warning can be inspected with process.on('warning') and will have the additional emitter, type, and count properties, referring to the event emitter instance, the event's name and the number of attached listeners, respectively. Its name property is set to 'MaxListenersExceededWarning'.

    • readonly static errorMonitor: typeof errorMonitor

      This symbol shall be used to install a listener for only monitoring 'error' events. Listeners installed using this symbol are called before the regular 'error' listeners are called.

      Installing a listener using this symbol does not change the behavior once an 'error' event is emitted. Therefore, the process will still crash if no regular 'error' listener is installed.
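
      A minimal sketch (not from the upstream docs):

      import { EventEmitter, errorMonitor } from 'node:events';

      const ee = new EventEmitter();
      ee.on(errorMonitor, (err) => {
        // Observe the error (e.g. for metrics) without consuming it.
        console.log('monitored:', err.message);
      });
      ee.on('error', (err) => console.log('handled:', err.message));

      ee.emit('error', new Error('boom'));
      // Prints: monitored: boom
      //         handled: boom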

    • callback: (error?: null | Error) => void
      ): void;
    • error: null | Error,
      callback: (error?: null | Error) => void
      ): void;
    • callback: (error?: null | Error) => void
      ): void;
    • chunk: any,
      encoding: BufferEncoding,
      callback: (error?: null | Error) => void
      ): void;
    • chunks: { chunk: any; encoding: BufferEncoding }[],
      callback: (error?: null | Error) => void
      ): void;
    • error: Error,
      event: string | symbol,
      ...args: AnyRest
      ): void;
    • event: 'close',
      listener: () => void
      ): this;

      Event emitter. The defined events include:

      1. close
      2. drain
      3. error
      4. finish
      5. pipe
      6. unpipe
      event: 'drain',
      listener: () => void
      ): this;

      Event emitter. The defined events include:

      1. close
      2. drain
      3. error
      4. finish
      5. pipe
      6. unpipe
      event: 'error',
      listener: (err: Error) => void
      ): this;

      Event emitter. The defined events include:

      1. close
      2. drain
      3. error
      4. finish
      5. pipe
      6. unpipe
      event: 'finish',
      listener: () => void
      ): this;

      Event emitter. The defined events include:

      1. close
      2. drain
      3. error
      4. finish
      5. pipe
      6. unpipe
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;

      Event emitter. The defined events include:

      1. close
      2. drain
      3. error
      4. finish
      5. pipe
      6. unpipe
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;

      Event emitter. The defined events include:

      1. close
      2. drain
      3. error
      4. finish
      5. pipe
      6. unpipe
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Event emitter. The defined events include:

      1. close
      2. drain
      3. error
      4. finish
      5. pipe
      6. unpipe
    • compose<T extends ReadableStream>(
      stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,
      options?: { signal: AbortSignal }
      ): T;
    • cork(): void;

      The writable.cork() method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.

      The primary intent of writable.cork() is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them all to writable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork() without implementing writable._writev() may have an adverse effect on throughput.

      See also: writable.uncork(), writable._writev().

    • error?: Error
      ): this;

      Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the writable stream has ended and subsequent calls to write() or end() will result in an ERR_STREAM_DESTROYED error. This is a destructive and immediate way to destroy a stream. Previous calls to write() may not have drained, and may trigger an ERR_STREAM_DESTROYED error. Use end() instead of destroy if data should flush before close, or wait for the 'drain' event before destroying the stream.

      Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.

      Implementors should not override this method, but instead implement writable._destroy().

      @param error

      Optional, an error to emit with 'error' event.
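
      For example (an illustrative sketch, not from the upstream docs):

      import { Writable } from 'node:stream';

      const writable = new Writable({
        write(chunk, encoding, callback) {
          callback();
        },
      });

      writable.on('error', (err) => console.error(err.message));
      writable.on('close', () => console.log('closed'));

      writable.destroy(new Error('kaboom')); // emits 'error', then 'close'
      writable.write('more'); // a subsequent write errors with ERR_STREAM_DESTROYED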

    • event: 'close'
      ): boolean;

      Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

      Returns true if the event had listeners, false otherwise.

      import { EventEmitter } from 'node:events';
      const myEmitter = new EventEmitter();
      
      // First listener
      myEmitter.on('event', function firstListener() {
        console.log('Helloooo! first listener');
      });
      // Second listener
      myEmitter.on('event', function secondListener(arg1, arg2) {
        console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
      });
      // Third listener
      myEmitter.on('event', function thirdListener(...args) {
        const parameters = args.join(', ');
        console.log(`event with parameters ${parameters} in third listener`);
      });
      
      console.log(myEmitter.listeners('event'));
      
      myEmitter.emit('event', 1, 2, 3, 4, 5);
      
      // Prints:
      // [
      //   [Function: firstListener],
      //   [Function: secondListener],
      //   [Function: thirdListener]
      // ]
      // Helloooo! first listener
      // event with parameters 1, 2 in second listener
      // event with parameters 1, 2, 3, 4, 5 in third listener
      
      event: 'drain'
      ): boolean;
      event: 'error',
      err: Error
      ): boolean;
      event: 'finish'
      ): boolean;
      event: 'pipe',
      src: Readable
      ): boolean;
      event: 'unpipe',
      src: Readable
      ): boolean;
      event: string | symbol,
      ...args: any[]
      ): boolean;
    • cb?: () => void
      ): this;

      Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

      Calling the write method after calling end will raise an error.

      // Write 'hello, ' and then end with 'world!'.
      import fs from 'node:fs';
      const file = fs.createWriteStream('example.txt');
      file.write('hello, ');
      file.end('world!');
      // Writing more now is not allowed!
      
      chunk: any,
      cb?: () => void
      ): this;

      Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

      Calling the write method after calling end will raise an error.

      // Write 'hello, ' and then end with 'world!'.
      import fs from 'node:fs';
      const file = fs.createWriteStream('example.txt');
      file.write('hello, ');
      file.end('world!');
      // Writing more now is not allowed!
      
      @param chunk

      Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

      chunk: any,
      encoding: BufferEncoding,
      cb?: () => void
      ): this;

      Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

      Calling the write method after calling end will raise an error.

      // Write 'hello, ' and then end with 'world!'.
      import fs from 'node:fs';
      const file = fs.createWriteStream('example.txt');
      file.write('hello, ');
      file.end('world!');
      // Writing more now is not allowed!
      
      @param chunk

      Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

      @param encoding

      The encoding if chunk is a string

    • eventNames(): (string | symbol)[];

      Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or Symbols.

      import { EventEmitter } from 'node:events';
      
      const myEE = new EventEmitter();
      myEE.on('foo', () => {});
      myEE.on('bar', () => {});
      
      const sym = Symbol('symbol');
      myEE.on(sym, () => {});
      
      console.log(myEE.eventNames());
      // Prints: [ 'foo', 'bar', Symbol(symbol) ]
      
    • getMaxListeners(): number;

      Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to EventEmitter.defaultMaxListeners.

    • eventName: string | symbol,
      listener?: Function
      ): number;

      Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.

      @param eventName

      The name of the event being listened for

      @param listener

      The event handler function
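
      For example (an illustrative sketch, not from the upstream docs):

      import { EventEmitter } from 'node:events';

      const ee = new EventEmitter();
      const listener = () => {};
      ee.on('ping', listener);
      ee.on('ping', listener);
      ee.on('ping', () => {});

      console.log(ee.listenerCount('ping'));           // 3
      console.log(ee.listenerCount('ping', listener)); // 2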

    • eventName: string | symbol
      ): Function[];

      Returns a copy of the array of listeners for the event named eventName.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      console.log(util.inspect(server.listeners('connection')));
      // Prints: [ [Function] ]
      
    • off<K>(
      eventName: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Alias for emitter.removeListener().

    • event: 'close',
      listener: () => void
      ): this;

      Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.on('foo', () => console.log('a'));
      myEE.prependListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: 'drain',
      listener: () => void
      ): this;
      event: 'error',
      listener: (err: Error) => void
      ): this;
      event: 'finish',
      listener: () => void
      ): this;
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • event: 'close',
      listener: () => void
      ): this;

      Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

      server.once('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.once('foo', () => console.log('a'));
      myEE.prependOnceListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: 'drain',
      listener: () => void
      ): this;
      event: 'error',
      listener: (err: Error) => void
      ): this;
      event: 'finish',
      listener: () => void
      ): this;
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • pipe<T extends WritableStream>(
      destination: T,
      options?: { end: boolean }
      ): T;
    • event: 'close',
      listener: () => void
      ): this;

      Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.prependListener('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: 'drain',
      listener: () => void
      ): this;
      event: 'error',
      listener: (err: Error) => void
      ): this;
      event: 'finish',
      listener: () => void
      ): this;
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • event: 'close',
      listener: () => void
      ): this;

      Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

      server.prependOnceListener('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: 'drain',
      listener: () => void
      ): this;
      event: 'error',
      listener: (err: Error) => void
      ): this;
      event: 'finish',
      listener: () => void
      ): this;
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • eventName: string | symbol
      ): Function[];

      Returns a copy of the array of listeners for the event named eventName, including any wrappers (such as those created by .once()).

      import { EventEmitter } from 'node:events';
      const emitter = new EventEmitter();
      emitter.once('log', () => console.log('log once'));
      
      // Returns a new Array with a function `onceWrapper` which has a property
      // `listener` which contains the original listener bound above
      const listeners = emitter.rawListeners('log');
      const logFnWrapper = listeners[0];
      
      // Logs "log once" to the console and does not unbind the `once` event
      logFnWrapper.listener();
      
      // Logs "log once" to the console and removes the listener
      logFnWrapper();
      
      emitter.on('log', () => console.log('log persistently'));
      // Will return a new Array with a single function bound by `.on()` above
      const newListeners = emitter.rawListeners('log');
      
      // Logs "log persistently" twice
      newListeners[0]();
      emitter.emit('log');
      
    • eventName?: string | symbol
      ): this;

      Removes all listeners, or those of the specified eventName.

      It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).

      Returns a reference to the EventEmitter, so that calls can be chained.
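
      As a minimal sketch (the event names are illustrative), removing the listeners for one event leaves other events untouched:

      import { EventEmitter } from 'node:events';
      
      const ee = new EventEmitter();
      ee.on('data', () => {});
      ee.on('end', () => {});
      
      // Remove only the 'data' listeners; the 'end' listener survives.
      ee.removeAllListeners('data');
      console.log(ee.eventNames()); // Prints: [ 'end' ]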

    • event: 'close',
      listener: () => void
      ): this;

      Removes the specified listener from the listener array for the event named eventName.

      const callback = (stream) => {
        console.log('someone connected!');
      };
      server.on('connection', callback);
      // ...
      server.removeListener('connection', callback);
      

      removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance.

      Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from emit() in progress. Subsequent events behave as expected.

      import { EventEmitter } from 'node:events';
      class MyEmitter extends EventEmitter {}
      const myEmitter = new MyEmitter();
      
      const callbackA = () => {
        console.log('A');
        myEmitter.removeListener('event', callbackB);
      };
      
      const callbackB = () => {
        console.log('B');
      };
      
      myEmitter.on('event', callbackA);
      
      myEmitter.on('event', callbackB);
      
      // callbackA removes listener callbackB but it will still be called.
      // Internal listener array at time of emit [callbackA, callbackB]
      myEmitter.emit('event');
      // Prints:
      //   A
      //   B
      
      // callbackB is now removed.
      // Internal listener array [callbackA]
      myEmitter.emit('event');
      // Prints:
      //   A
      

      Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.

      When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed:

      import { EventEmitter } from 'node:events';
      const ee = new EventEmitter();
      
      function pong() {
        console.log('pong');
      }
      
      ee.on('ping', pong);
      ee.once('ping', pong);
      ee.removeListener('ping', pong);
      
      ee.emit('ping');
      ee.emit('ping');
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      event: 'drain',
      listener: () => void
      ): this;
      event: 'error',
      listener: (err: Error) => void
      ): this;
      event: 'finish',
      listener: () => void
      ): this;
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • encoding: BufferEncoding
      ): this;

      The writable.setDefaultEncoding() method sets the default encoding for a Writable stream.

      @param encoding

      The new default encoding
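
      As a sketch (the stream construction here is illustrative), the default encoding controls how string chunks are decoded when write() is called without an explicit encoding:

      import { Writable } from 'node:stream';
      
      const writable = new Writable({
        write(chunk, encoding, callback) {
          console.log(chunk); // Prints: <Buffer de ad be ef>
          callback();
        },
      });
      
      writable.setDefaultEncoding('hex');
      writable.write('deadbeef'); // Decoded as hex rather than the usual utf8.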

    • n: number
      ): this;

      By default, EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps find memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.

      Returns a reference to the EventEmitter, so that calls can be chained.
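
      For example, raising the limit before attaching many listeners avoids the leak warning (a sketch; the count is arbitrary):

      import { EventEmitter } from 'node:events';
      
      const ee = new EventEmitter();
      ee.setMaxListeners(20);
      
      for (let i = 0; i < 15; i++) {
        ee.on('tick', () => {}); // No MaxListenersExceededWarning past 10 listeners.
      }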

    • ): Buffer;

      Calculates the signature on all the data passed through using either sign.update() or sign.write().

      If privateKey is not a KeyObject, this function behaves as if privateKey had been passed to createPrivateKey. If it is an object, additional properties such as dsaEncoding, padding, and saltLength can be passed.

      If outputEncoding is provided, a string is returned; otherwise a Buffer is returned.

      The Sign object can not be used again after the sign.sign() method has been called. Multiple calls to sign.sign() will result in an error being thrown.

      outputFormat: BinaryToTextEncoding
      ): string;

      Calculates the signature on all the data passed through using either sign.update() or sign.write().

      If privateKey is not a KeyObject, this function behaves as if privateKey had been passed to createPrivateKey. If it is an object, additional properties such as dsaEncoding, padding, and saltLength can be passed.

      If outputEncoding is provided, a string is returned; otherwise a Buffer is returned.

      The Sign object can not be used again after the sign.sign() method has been called. Multiple calls to sign.sign() will result in an error being thrown.
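
      A minimal sketch of the full Sign lifecycle, using an EC key pair generated on the fly (the curve and payload are arbitrary):

      import { generateKeyPairSync, createSign } from 'node:crypto';
      
      const { privateKey } = generateKeyPairSync('ec', { namedCurve: 'P-256' });
      
      const sign = createSign('SHA256');
      sign.update('some data to sign'); // May be called repeatedly as data streams in.
      sign.end();
      
      // A string is returned because an output encoding ('hex') is given.
      const signature = sign.sign(privateKey, 'hex');
      console.log(signature);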

    • uncork(): void;

      The writable.uncork() method flushes all data buffered since cork was called.

      When using writable.cork() and writable.uncork() to manage the buffering of writes to a stream, defer calls to writable.uncork() using process.nextTick(). Doing so allows batching of all writable.write() calls that occur within a given Node.js event loop phase.

      stream.cork();
      stream.write('some ');
      stream.write('data ');
      process.nextTick(() => stream.uncork());
      

      If the writable.cork() method is called multiple times on a stream, the same number of calls to writable.uncork() must be called to flush the buffered data.

      stream.cork();
      stream.write('some ');
      stream.cork();
      stream.write('data ');
      process.nextTick(() => {
        stream.uncork();
        // The data will not be flushed until uncork() is called a second time.
        stream.uncork();
      });
      

      See also: writable.cork().

    • ): this;

      Updates the Sign content with the given data, the encoding of which is given in inputEncoding. If inputEncoding is not provided, and the data is a string, an encoding of 'utf8' is enforced. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.

      This can be called many times with new data as it is streamed.

      data: string,
      inputEncoding: Encoding
      ): this;

      Updates the Sign content with the given data, the encoding of which is given in inputEncoding. If inputEncoding is not provided, and the data is a string, an encoding of 'utf8' is enforced. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.

      This can be called many times with new data as it is streamed.

      @param inputEncoding

      The encoding of the data string.

    • chunk: any,
      callback?: (error: undefined | null | Error) => void
      ): boolean;

      The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

      The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

      While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

      Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

      If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

      function write(data, cb) {
        if (!stream.write(data)) {
          stream.once('drain', cb);
        } else {
          process.nextTick(cb);
        }
      }
      
      // Wait for cb to be called before doing any other write.
      write('hello', () => {
        console.log('Write completed, do more writes now.');
      });
      

      A Writable stream in object mode will always ignore the encoding argument.

      @param chunk

      Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

      @param callback

      Callback for when this chunk of data is flushed.

      @returns

      false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.

      chunk: any,
      encoding: BufferEncoding,
      callback?: (error: undefined | null | Error) => void
      ): boolean;

      The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

      The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

      While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

      Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

      If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

      function write(data, cb) {
        if (!stream.write(data)) {
          stream.once('drain', cb);
        } else {
          process.nextTick(cb);
        }
      }
      
      // Wait for cb to be called before doing any other write.
      write('hello', () => {
        console.log('Write completed, do more writes now.');
      });
      

      A Writable stream in object mode will always ignore the encoding argument.

      @param chunk

      Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

      @param encoding

      The encoding, if chunk is a string.

      @param callback

      Callback for when this chunk of data is flushed.

      @returns

      false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.

    • signal: AbortSignal,
      resource: (event: Event) => void
      ): Disposable;

      Listens once to the abort event on the provided signal.

      Listening to the abort event on abort signals is unsafe and may lead to resource leaks since another third party with the signal can call e.stopImmediatePropagation(). Unfortunately Node.js cannot change this since it would violate the web standard. Additionally, the original API makes it easy to forget to remove listeners.

      This API allows safely using AbortSignals in Node.js APIs by solving these two issues by listening to the event such that stopImmediatePropagation does not prevent the listener from running.

      Returns a disposable so that it may be unsubscribed from more easily.

      import { addAbortListener } from 'node:events';
      
      function example(signal) {
        let disposable;
        try {
          signal.addEventListener('abort', (e) => e.stopImmediatePropagation());
          disposable = addAbortListener(signal, (e) => {
            // Do something when signal is aborted.
          });
        } finally {
          disposable?.[Symbol.dispose]();
        }
      }
      
      @returns

      Disposable that removes the abort listener.

    • static fromWeb(
      writableStream: WritableStream,
      options?: Pick<WritableOptions<Writable>, 'signal' | 'decodeStrings' | 'highWaterMark' | 'objectMode'>

      A utility method for creating a Writable from a web WritableStream.
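
      A sketch of wrapping a web WritableStream so it can be driven through the Node.js stream API (the sink is illustrative):

      import { Writable } from 'node:stream';
      import { WritableStream } from 'node:stream/web';
      
      const webWritable = new WritableStream({
        write(chunk) {
          console.log('received:', chunk);
        },
      });
      
      const nodeWritable = Writable.fromWeb(webWritable);
      nodeWritable.write('hello');
      nodeWritable.end();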

    • emitter: EventEmitter<DefaultEventMap> | EventTarget,
      name: string | symbol
      ): Function[];

      Returns a copy of the array of listeners for the event named eventName.

      For EventEmitters this behaves exactly the same as calling .listeners on the emitter.

      For EventTargets this is the only way to get the event listeners for the event target. This is useful for debugging and diagnostic purposes.

      import { getEventListeners, EventEmitter } from 'node:events';
      
      {
        const ee = new EventEmitter();
        const listener = () => console.log('Events are fun');
        ee.on('foo', listener);
        console.log(getEventListeners(ee, 'foo')); // [ [Function: listener] ]
      }
      {
        const et = new EventTarget();
        const listener = () => console.log('Events are fun');
        et.addEventListener('foo', listener);
        console.log(getEventListeners(et, 'foo')); // [ [Function: listener] ]
      }
      
    • emitter: EventEmitter<DefaultEventMap> | EventTarget
      ): number;

      Returns the currently set max amount of listeners.

      For EventEmitters this behaves exactly the same as calling .getMaxListeners on the emitter.

      For EventTargets this is the only way to get the max event listeners for the event target. If the number of event handlers on a single EventTarget exceeds the max set, the EventTarget will print a warning.

      import { getMaxListeners, setMaxListeners, EventEmitter } from 'node:events';
      
      {
        const ee = new EventEmitter();
        console.log(getMaxListeners(ee)); // 10
        setMaxListeners(11, ee);
        console.log(getMaxListeners(ee)); // 11
      }
      {
        const et = new EventTarget();
        console.log(getMaxListeners(et)); // 10
        setMaxListeners(11, et);
        console.log(getMaxListeners(et)); // 11
      }
      
    • static on(
      emitter: EventEmitter,
      eventName: string | symbol,
      options?: StaticEventEmitterIteratorOptions
      ): AsyncIterator<any[]>;
      import { on, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ee = new EventEmitter();
      
      // Emit later on
      process.nextTick(() => {
        ee.emit('foo', 'bar');
        ee.emit('foo', 42);
      });
      
      for await (const event of on(ee, 'foo')) {
        // The execution of this inner block is synchronous and it
        // processes one event at a time (even with await). Do not use
        // if concurrent execution is required.
        console.log(event); // prints ['bar'] [42]
      }
      // Unreachable here
      

      Returns an AsyncIterator that iterates eventName events. It will throw if the EventEmitter emits 'error'. It removes all listeners when exiting the loop. The value returned by each iteration is an array composed of the emitted event arguments.

      An AbortSignal can be used to cancel waiting on events:

      import { on, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ac = new AbortController();
      
      (async () => {
        const ee = new EventEmitter();
      
        // Emit later on
        process.nextTick(() => {
          ee.emit('foo', 'bar');
          ee.emit('foo', 42);
        });
      
        for await (const event of on(ee, 'foo', { signal: ac.signal })) {
          // The execution of this inner block is synchronous and it
          // processes one event at a time (even with await). Do not use
          // if concurrent execution is required.
          console.log(event); // prints ['bar'] [42]
        }
        // Unreachable here
      })();
      
      process.nextTick(() => ac.abort());
      

      Use the close option to specify an array of event names that will end the iteration:

      import { on, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ee = new EventEmitter();
      
      // Emit later on
      process.nextTick(() => {
        ee.emit('foo', 'bar');
        ee.emit('foo', 42);
        ee.emit('close');
      });
      
      for await (const event of on(ee, 'foo', { close: ['close'] })) {
        console.log(event); // prints ['bar'] [42]
      }
      // the loop will exit after 'close' is emitted
      console.log('done'); // prints 'done'
      
      @returns

      An AsyncIterator that iterates eventName events emitted by the emitter

      static on(
      emitter: EventTarget,
      eventName: string,
      options?: StaticEventEmitterIteratorOptions
      ): AsyncIterator<any[]>;
      import { on, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ee = new EventEmitter();
      
      // Emit later on
      process.nextTick(() => {
        ee.emit('foo', 'bar');
        ee.emit('foo', 42);
      });
      
      for await (const event of on(ee, 'foo')) {
        // The execution of this inner block is synchronous and it
        // processes one event at a time (even with await). Do not use
        // if concurrent execution is required.
        console.log(event); // prints ['bar'] [42]
      }
      // Unreachable here
      

      Returns an AsyncIterator that iterates eventName events. It will throw if the EventEmitter emits 'error'. It removes all listeners when exiting the loop. The value returned by each iteration is an array composed of the emitted event arguments.

      An AbortSignal can be used to cancel waiting on events:

      import { on, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ac = new AbortController();
      
      (async () => {
        const ee = new EventEmitter();
      
        // Emit later on
        process.nextTick(() => {
          ee.emit('foo', 'bar');
          ee.emit('foo', 42);
        });
      
        for await (const event of on(ee, 'foo', { signal: ac.signal })) {
          // The execution of this inner block is synchronous and it
          // processes one event at a time (even with await). Do not use
          // if concurrent execution is required.
          console.log(event); // prints ['bar'] [42]
        }
        // Unreachable here
      })();
      
      process.nextTick(() => ac.abort());
      

      Use the close option to specify an array of event names that will end the iteration:

      import { on, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ee = new EventEmitter();
      
      // Emit later on
      process.nextTick(() => {
        ee.emit('foo', 'bar');
        ee.emit('foo', 42);
        ee.emit('close');
      });
      
      for await (const event of on(ee, 'foo', { close: ['close'] })) {
        console.log(event); // prints ['bar'] [42]
      }
      // the loop will exit after 'close' is emitted
      console.log('done'); // prints 'done'
      
      @returns

      An AsyncIterator that iterates eventName events emitted by the emitter

    • static once(
      emitter: EventEmitter,
      eventName: string | symbol,
      options?: StaticEventEmitterOptions
      ): Promise<any[]>;

      Creates a Promise that is fulfilled when the EventEmitter emits the given event or that is rejected if the EventEmitter emits 'error' while waiting. The Promise will resolve with an array of all the arguments emitted to the given event.

      This method is intentionally generic and works with the web platform EventTarget interface, which has no special 'error' event semantics and does not listen to the 'error' event.

      import { once, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ee = new EventEmitter();
      
      process.nextTick(() => {
        ee.emit('myevent', 42);
      });
      
      const [value] = await once(ee, 'myevent');
      console.log(value);
      
      const err = new Error('kaboom');
      process.nextTick(() => {
        ee.emit('error', err);
      });
      
      try {
        await once(ee, 'myevent');
      } catch (err) {
        console.error('error happened', err);
      }
      

      The special handling of the 'error' event is only used when events.once() is used to wait for another event. If events.once() is used to wait for the 'error' event itself, then it is treated as any other kind of event without special handling:

      import { EventEmitter, once } from 'node:events';
      
      const ee = new EventEmitter();
      
      once(ee, 'error')
        .then(([err]) => console.log('ok', err.message))
        .catch((err) => console.error('error', err.message));
      
      ee.emit('error', new Error('boom'));
      
      // Prints: ok boom
      

      An AbortSignal can be used to cancel waiting for the event:

      import { EventEmitter, once } from 'node:events';
      
      const ee = new EventEmitter();
      const ac = new AbortController();
      
      async function foo(emitter, event, signal) {
        try {
          await once(emitter, event, { signal });
          console.log('event emitted!');
        } catch (error) {
          if (error.name === 'AbortError') {
            console.error('Waiting for the event was canceled!');
          } else {
            console.error('There was an error', error.message);
          }
        }
      }
      
      foo(ee, 'foo', ac.signal);
      ac.abort(); // Abort waiting for the event
      ee.emit('foo'); // Prints: Waiting for the event was canceled!
      
      static once(
      emitter: EventTarget,
      eventName: string,
      options?: StaticEventEmitterOptions
      ): Promise<any[]>;

      Creates a Promise that is fulfilled when the EventEmitter emits the given event or that is rejected if the EventEmitter emits 'error' while waiting. The Promise will resolve with an array of all the arguments emitted to the given event.

      This method is intentionally generic and works with the web platform EventTarget interface, which has no special 'error' event semantics and does not listen to the 'error' event.

      import { once, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ee = new EventEmitter();
      
      process.nextTick(() => {
        ee.emit('myevent', 42);
      });
      
      const [value] = await once(ee, 'myevent');
      console.log(value);
      
      const err = new Error('kaboom');
      process.nextTick(() => {
        ee.emit('error', err);
      });
      
      try {
        await once(ee, 'myevent');
      } catch (err) {
        console.error('error happened', err);
      }
      

      The special handling of the 'error' event is only used when events.once() is used to wait for another event. If events.once() is used to wait for the 'error' event itself, then it is treated as any other kind of event without special handling:

      import { EventEmitter, once } from 'node:events';
      
      const ee = new EventEmitter();
      
      once(ee, 'error')
        .then(([err]) => console.log('ok', err.message))
        .catch((err) => console.error('error', err.message));
      
      ee.emit('error', new Error('boom'));
      
      // Prints: ok boom
      

      An AbortSignal can be used to cancel waiting for the event:

      import { EventEmitter, once } from 'node:events';
      
      const ee = new EventEmitter();
      const ac = new AbortController();
      
      async function foo(emitter, event, signal) {
        try {
          await once(emitter, event, { signal });
          console.log('event emitted!');
        } catch (error) {
          if (error.name === 'AbortError') {
            console.error('Waiting for the event was canceled!');
          } else {
            console.error('There was an error', error.message);
          }
        }
      }
      
      foo(ee, 'foo', ac.signal);
      ac.abort(); // Abort waiting for the event
      ee.emit('foo'); // Prints: Waiting for the event was canceled!
      
    • n?: number,
      ...eventTargets: (EventEmitter<DefaultEventMap> | EventTarget)[]
      ): void;
      import { setMaxListeners, EventEmitter } from 'node:events';
      
      const target = new EventTarget();
      const emitter = new EventEmitter();
      
      setMaxListeners(5, target, emitter);
      
      @param n

      A non-negative number. The maximum number of listeners per EventTarget event.

      @param eventTargets

      Zero or more {EventTarget} or {EventEmitter} instances. If none are specified, n is set as the default max for all newly created {EventTarget} and {EventEmitter} objects.

    • static toWeb(
      streamWritable: Writable

      A utility method for creating a web WritableStream from a Writable.
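
      And the inverse direction, as a sketch (the underlying write implementation is illustrative):

      import { Writable } from 'node:stream';
      
      const nodeWritable = new Writable({
        write(chunk, encoding, callback) {
          console.log('received:', chunk.toString());
          callback();
        },
      });
      
      const webWritable = Writable.toWeb(nodeWritable);
      const writer = webWritable.getWriter();
      await writer.write('hello');
      await writer.close();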

  • class Verify

    The Verify class is a utility for verifying signatures. It can be used in one of two ways:

    • As a writable stream where written data is used to validate against the supplied signature, or
    • Using the verify.update() and verify.verify() methods to verify the signature.

    The createVerify method is used to create Verify instances. Verify objects are not to be created directly using the new keyword.

    See Sign for examples.

    • readonly closed: boolean

      Is true after 'close' has been emitted.

    • destroyed: boolean

      Is true after writable.destroy() has been called.

    • readonly errored: null | Error

      Returns error if the stream has been destroyed with an error.

    • readonly writable: boolean

      Is true if it is safe to call writable.write(), which means the stream has not been destroyed, errored, or ended.

    • readonly writableCorked: number

      Number of times writable.uncork() needs to be called in order to fully uncork the stream.

    • readonly writableEnded: boolean

      Is true after writable.end() has been called. This property does not indicate whether the data has been flushed; for that, use writable.writableFinished instead.

    • readonly writableFinished: boolean

      Is set to true immediately before the 'finish' event is emitted.

    • readonly writableHighWaterMark: number

      Return the value of highWaterMark passed when creating this Writable.

    • readonly writableLength: number

      This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the highWaterMark.

    • readonly writableNeedDrain: boolean

      Is true if the stream's buffer has been full and the stream will emit 'drain'.

    • readonly writableObjectMode: boolean

      Getter for the property objectMode of a given Writable stream.
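
      These introspection properties can be observed together; a sketch with a deliberately tiny highWaterMark:

      import { Writable } from 'node:stream';
      
      const writable = new Writable({
        highWaterMark: 4,
        write(chunk, encoding, callback) {
          setImmediate(callback);
        },
      });
      
      console.log(writable.writableHighWaterMark); // 4
      writable.write('hello');                     // 5 bytes exceed the highWaterMark
      console.log(writable.writableLength);        // 5
      console.log(writable.writableNeedDrain);     // true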

    • static captureRejections: boolean

      Value: boolean

      Change the default captureRejections option on all new EventEmitter objects.
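
      A sketch of the effect: with capture enabled, a rejected promise from an async listener is routed to the 'error' event instead of becoming an unhandled rejection:

      import { EventEmitter } from 'node:events';
      
      EventEmitter.captureRejections = true;
      
      const ee = new EventEmitter();
      ee.on('error', (err) => console.log('caught:', err.message));
      ee.on('something', async () => {
        throw new Error('async boom');
      });
      
      ee.emit('something'); // Prints: caught: async boom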

    • readonly static captureRejectionSymbol: typeof captureRejectionSymbol

      Value: Symbol.for('nodejs.rejection')

      See how to write a custom rejection handler.

    • static defaultMaxListeners: number

      By default, a maximum of 10 listeners can be registered for any single event. This limit can be changed for individual EventEmitter instances using the emitter.setMaxListeners(n) method. To change the default for all EventEmitter instances, the events.defaultMaxListeners property can be used. If this value is not a positive number, a RangeError is thrown.

      Take caution when setting the events.defaultMaxListeners because the change affects all EventEmitter instances, including those created before the change is made. However, calling emitter.setMaxListeners(n) still has precedence over events.defaultMaxListeners.

      This is not a hard limit. The EventEmitter instance will allow more listeners to be added but will output a trace warning to stderr indicating that a "possible EventEmitter memory leak" has been detected. For any single EventEmitter, the emitter.getMaxListeners() and emitter.setMaxListeners() methods can be used to temporarily avoid this warning:

      import { EventEmitter } from 'node:events';
      const emitter = new EventEmitter();
      emitter.setMaxListeners(emitter.getMaxListeners() + 1);
      emitter.once('event', () => {
        // do stuff
        emitter.setMaxListeners(Math.max(emitter.getMaxListeners() - 1, 0));
      });
      

      The --trace-warnings command-line flag can be used to display the stack trace for such warnings.

      The emitted warning can be inspected with process.on('warning') and will have the additional emitter, type, and count properties, referring to the event emitter instance, the event's name and the number of attached listeners, respectively. Its name property is set to 'MaxListenersExceededWarning'.

    • readonly static errorMonitor: typeof errorMonitor

      This symbol shall be used to install a listener for only monitoring 'error' events. Listeners installed using this symbol are called before the regular 'error' listeners are called.

      Installing a listener using this symbol does not change the behavior once an 'error' event is emitted. Therefore, the process will still crash if no regular 'error' listener is installed.
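
      A sketch showing the monitor firing before the regular handler without changing 'error' semantics:

      import { EventEmitter, errorMonitor } from 'node:events';
      
      const ee = new EventEmitter();
      ee.on(errorMonitor, (err) => console.log('monitor saw:', err.message));
      ee.on('error', (err) => console.log('handler saw:', err.message));
      
      ee.emit('error', new Error('boom'));
      // Prints:
      //   monitor saw: boom
      //   handler saw: boom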

    • callback: (error?: null | Error) => void
      ): void;
    • error: null | Error,
      callback: (error?: null | Error) => void
      ): void;
    • callback: (error?: null | Error) => void
      ): void;
    • chunk: any,
      encoding: BufferEncoding,
      callback: (error?: null | Error) => void
      ): void;
    • chunks: { chunk: any; encoding: BufferEncoding }[],
      callback: (error?: null | Error) => void
      ): void;
    • error: Error,
      event: string | symbol,
      ...args: AnyRest
      ): void;
    • event: 'close',
      listener: () => void
      ): this;

      Event emitter. The defined events include:

      1. close
      2. drain
      3. error
      4. finish
      5. pipe
      6. unpipe
      event: 'drain',
      listener: () => void
      ): this;

      Event emitter. The defined events include:

      1. close
      2. drain
      3. error
      4. finish
      5. pipe
      6. unpipe
      event: 'error',
      listener: (err: Error) => void
      ): this;

      Event emitter. The defined events include:

      1. close
      2. drain
      3. error
      4. finish
      5. pipe
      6. unpipe
      event: 'finish',
      listener: () => void
      ): this;

      Event emitter. The defined events include:

      1. close
      2. drain
      3. error
      4. finish
      5. pipe
      6. unpipe
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;

      Event emitter. The defined events include:

      1. close
      2. drain
      3. error
      4. finish
      5. pipe
      6. unpipe
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;

      Event emitter. The defined events include:

      1. close
      2. drain
      3. error
      4. finish
      5. pipe
      6. unpipe
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Event emitter. The defined events include:

      1. close
      2. drain
      3. error
      4. finish
      5. pipe
      6. unpipe
    • compose<T extends ReadableStream>(
      stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,
      options?: { signal: AbortSignal }
      ): T;
    • cork(): void;

      The writable.cork() method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.

      The primary intent of writable.cork() is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them all to writable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork() without implementing writable._writev() may have an adverse effect on throughput.

      See also: writable.uncork(), writable._writev().

    • error?: Error
      ): this;

      Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the writable stream has ended and subsequent calls to write() or end() will result in an ERR_STREAM_DESTROYED error. This is a destructive and immediate way to destroy a stream. Previous calls to write() may not have drained, and may trigger an ERR_STREAM_DESTROYED error. Use end() instead of destroy if data should flush before close, or wait for the 'drain' event before destroying the stream.

      Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.

      Implementors should not override this method, but instead implement writable._destroy().

      @param error

      Optional, an error to emit with 'error' event.
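
      A minimal sketch of destroying a stream with an error (the stream construction is illustrative):

      import { Writable } from 'node:stream';
      
      const writable = new Writable({
        write(chunk, encoding, callback) {
          callback();
        },
      });
      
      writable.on('error', (err) => console.error('destroyed with:', err.message));
      writable.on('close', () => console.log('closed'));
      
      writable.destroy(new Error('boom'));
      // Subsequent write() or end() calls fail with ERR_STREAM_DESTROYED.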

    • event: 'close'
      ): boolean;

      Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

      Returns true if the event had listeners, false otherwise.

      import { EventEmitter } from 'node:events';
      const myEmitter = new EventEmitter();
      
      // First listener
      myEmitter.on('event', function firstListener() {
        console.log('Helloooo! first listener');
      });
      // Second listener
      myEmitter.on('event', function secondListener(arg1, arg2) {
        console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
      });
      // Third listener
      myEmitter.on('event', function thirdListener(...args) {
        const parameters = args.join(', ');
        console.log(`event with parameters ${parameters} in third listener`);
      });
      
      console.log(myEmitter.listeners('event'));
      
      myEmitter.emit('event', 1, 2, 3, 4, 5);
      
      // Prints:
      // [
      //   [Function: firstListener],
      //   [Function: secondListener],
      //   [Function: thirdListener]
      // ]
      // Helloooo! first listener
      // event with parameters 1, 2 in second listener
      // event with parameters 1, 2, 3, 4, 5 in third listener
      
      event: 'drain'
      ): boolean;
      event: 'error',
      err: Error
      ): boolean;
      event: 'finish'
      ): boolean;
      event: 'pipe',
      src: Readable
      ): boolean;
      event: 'unpipe',
      src: Readable
      ): boolean;
      event: string | symbol,
      ...args: any[]
      ): boolean;
    • cb?: () => void
      ): this;

      Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

      Calling the write method after calling end will raise an error.

      // Write 'hello, ' and then end with 'world!'.
      import fs from 'node:fs';
      const file = fs.createWriteStream('example.txt');
      file.write('hello, ');
      file.end('world!');
      // Writing more now is not allowed!
      
      chunk: any,
      cb?: () => void
      ): this;

      Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

      Calling the write method after calling end will raise an error.

      // Write 'hello, ' and then end with 'world!'.
      import fs from 'node:fs';
      const file = fs.createWriteStream('example.txt');
      file.write('hello, ');
      file.end('world!');
      // Writing more now is not allowed!
      
      @param chunk

      Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

      chunk: any,
      encoding: BufferEncoding,
      cb?: () => void
      ): this;

      Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

      Calling the write method after calling end will raise an error.

      // Write 'hello, ' and then end with 'world!'.
      import fs from 'node:fs';
      const file = fs.createWriteStream('example.txt');
      file.write('hello, ');
      file.end('world!');
      // Writing more now is not allowed!
      
      @param chunk

      Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

      @param encoding

      The encoding if chunk is a string

    • eventNames(): (string | symbol)[];

      Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or Symbols.

      import { EventEmitter } from 'node:events';
      
      const myEE = new EventEmitter();
      myEE.on('foo', () => {});
      myEE.on('bar', () => {});
      
      const sym = Symbol('symbol');
      myEE.on(sym, () => {});
      
      console.log(myEE.eventNames());
      // Prints: [ 'foo', 'bar', Symbol(symbol) ]
      
    • getMaxListeners(): number;

      Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to EventEmitter.defaultMaxListeners.

    • eventName: string | symbol,
      listener?: Function
      ): number;

      Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.

      @param eventName

      The name of the event being listened for

      @param listener

      The event handler function
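
      A minimal sketch (the event name is illustrative); passing the listener counts how many times that exact function is registered:

      import { EventEmitter } from 'node:events';
      
      const ee = new EventEmitter();
      const handler = () => {};
      
      ee.on('data', handler);
      ee.on('data', handler);
      ee.on('data', () => {});
      
      console.log(ee.listenerCount('data'));          // 3
      console.log(ee.listenerCount('data', handler)); // 2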

    • eventName: string | symbol
      ): Function[];

      Returns a copy of the array of listeners for the event named eventName.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      console.log(util.inspect(server.listeners('connection')));
      // Prints: [ [Function] ]
      
    • off<K>(
      eventName: string | symbol,
      listener: (...args: any[]) => void
      ): this;

      Alias for emitter.removeListener().

    • event: 'close',
      listener: () => void
      ): this;

      Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.on('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.on('foo', () => console.log('a'));
      myEE.prependListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: 'drain',
      listener: () => void
      ): this;
      event: 'error',
      listener: (err: Error) => void
      ): this;
      event: 'finish',
      listener: () => void
      ): this;
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • event: 'close',
      listener: () => void
      ): this;

      Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

      server.once('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

      import { EventEmitter } from 'node:events';
      const myEE = new EventEmitter();
      myEE.once('foo', () => console.log('a'));
      myEE.prependOnceListener('foo', () => console.log('b'));
      myEE.emit('foo');
      // Prints:
      //   b
      //   a
      
      @param listener

      The callback function

      event: 'drain',
      listener: () => void
      ): this;
      event: 'error',
      listener: (err: Error) => void
      ): this;
      event: 'finish',
      listener: () => void
      ): this;
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • pipe<T extends WritableStream>(
      destination: T,
      options?: { end: boolean }
      ): T;
    • event: 'close',
      listener: () => void
      ): this;

      Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

      server.prependListener('connection', (stream) => {
        console.log('someone connected!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: 'drain',
      listener: () => void
      ): this;
      event: 'error',
      listener: (err: Error) => void
      ): this;
      event: 'finish',
      listener: () => void
      ): this;
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • event: 'close',
      listener: () => void
      ): this;

      Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

      server.prependOnceListener('connection', (stream) => {
        console.log('Ah, we have our first user!');
      });
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      @param listener

      The callback function

      event: 'drain',
      listener: () => void
      ): this;
      event: 'error',
      listener: (err: Error) => void
      ): this;
      event: 'finish',
      listener: () => void
      ): this;
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • eventName: string | symbol
      ): Function[];

      Returns a copy of the array of listeners for the event named eventName, including any wrappers (such as those created by .once()).

      import { EventEmitter } from 'node:events';
      const emitter = new EventEmitter();
      emitter.once('log', () => console.log('log once'));
      
      // Returns a new Array with a function `onceWrapper` which has a property
      // `listener` which contains the original listener bound above
      const listeners = emitter.rawListeners('log');
      const logFnWrapper = listeners[0];
      
      // Logs "log once" to the console and does not unbind the `once` event
      logFnWrapper.listener();
      
      // Logs "log once" to the console and removes the listener
      logFnWrapper();
      
      emitter.on('log', () => console.log('log persistently'));
      // Will return a new Array with a single function bound by `.on()` above
      const newListeners = emitter.rawListeners('log');
      
      // Logs "log persistently" twice
      newListeners[0]();
      emitter.emit('log');
      
    • eventName?: string | symbol
      ): this;

      Removes all listeners, or those of the specified eventName.

      It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).

      Returns a reference to the EventEmitter, so that calls can be chained.

    • event: 'close',
      listener: () => void
      ): this;

      Removes the specified listener from the listener array for the event named eventName.

      const callback = (stream) => {
        console.log('someone connected!');
      };
      server.on('connection', callback);
      // ...
      server.removeListener('connection', callback);
      

      removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance.

      Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from emit() in progress. Subsequent events behave as expected.

      import { EventEmitter } from 'node:events';
      class MyEmitter extends EventEmitter {}
      const myEmitter = new MyEmitter();
      
      const callbackA = () => {
        console.log('A');
        myEmitter.removeListener('event', callbackB);
      };
      
      const callbackB = () => {
        console.log('B');
      };
      
      myEmitter.on('event', callbackA);
      
      myEmitter.on('event', callbackB);
      
      // callbackA removes listener callbackB but it will still be called.
      // Internal listener array at time of emit [callbackA, callbackB]
      myEmitter.emit('event');
      // Prints:
      //   A
      //   B
      
      // callbackB is now removed.
      // Internal listener array [callbackA]
      myEmitter.emit('event');
      // Prints:
      //   A
      

      Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.

      When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed:

      import { EventEmitter } from 'node:events';
      const ee = new EventEmitter();
      
      function pong() {
        console.log('pong');
      }
      
      ee.on('ping', pong);
      ee.once('ping', pong);
      ee.removeListener('ping', pong);
      
      ee.emit('ping');
      ee.emit('ping');
      

      Returns a reference to the EventEmitter, so that calls can be chained.

      event: 'drain',
      listener: () => void
      ): this;
      event: 'error',
      listener: (err: Error) => void
      ): this;
      event: 'finish',
      listener: () => void
      ): this;
      event: 'pipe',
      listener: (src: Readable) => void
      ): this;
      event: 'unpipe',
      listener: (src: Readable) => void
      ): this;
      event: string | symbol,
      listener: (...args: any[]) => void
      ): this;
    • encoding: BufferEncoding
      ): this;

      The writable.setDefaultEncoding() method sets the default encoding for a Writable stream.

      @param encoding

      The new default encoding

    • n: number
      ): this;

      By default, EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps find memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.

      Returns a reference to the EventEmitter, so that calls can be chained.

    • uncork(): void;

      The writable.uncork() method flushes all data buffered since cork was called.

      When using writable.cork() and writable.uncork() to manage the buffering of writes to a stream, defer calls to writable.uncork() using process.nextTick(). Doing so allows batching of all writable.write() calls that occur within a given Node.js event loop phase.

      stream.cork();
      stream.write('some ');
      stream.write('data ');
      process.nextTick(() => stream.uncork());
      

      If the writable.cork() method is called multiple times on a stream, the same number of calls to writable.uncork() must be called to flush the buffered data.

      stream.cork();
      stream.write('some ');
      stream.cork();
      stream.write('data ');
      process.nextTick(() => {
        stream.uncork();
        // The data will not be flushed until uncork() is called a second time.
        stream.uncork();
      });
      

      See also: writable.cork().

    • ): Verify;

      Updates the Verify content with the given data, the encoding of which is given in inputEncoding. If inputEncoding is not provided, and the data is a string, an encoding of 'utf8' is enforced. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.

      This can be called many times with new data as it is streamed.

      data: string,
      inputEncoding: Encoding
      ): Verify;

      Updates the Verify content with the given data, the encoding of which is given in inputEncoding. If inputEncoding is not provided, and the data is a string, an encoding of 'utf8' is enforced. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.

      This can be called many times with new data as it is streamed.

      @param inputEncoding

      The encoding of the data string.

    • signature: ArrayBufferView
      ): boolean;

      Verifies the provided data using the given object and signature.

      If object is not a KeyObject, this function behaves as if object had been passed to createPublicKey. If it is an object, additional properties such as dsaEncoding, padding, and saltLength can be passed.

      The signature argument is the previously calculated signature for the data, in the signatureEncoding. If a signatureEncoding is specified, the signature is expected to be a string; otherwise signature is expected to be a Buffer, TypedArray, or DataView.

      The verify object can not be used again after verify.verify() has been called. Multiple calls to verify.verify() will result in an error being thrown.

      Because public keys can be derived from private keys, a private key may be passed instead of a public key.

      signature: string,
      signature_format?: BinaryToTextEncoding
      ): boolean;

      Verifies the provided data using the given object and signature.

      If object is not a KeyObject, this function behaves as if object had been passed to createPublicKey. If it is an object, additional properties such as dsaEncoding, padding, and saltLength can be passed.

      The signature argument is the previously calculated signature for the data, in the signatureEncoding. If a signatureEncoding is specified, the signature is expected to be a string; otherwise signature is expected to be a Buffer, TypedArray, or DataView.

      The verify object can not be used again after verify.verify() has been called. Multiple calls to verify.verify() will result in an error being thrown.

      Because public keys can be derived from private keys, a private key may be passed instead of a public key.
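
      A hedged round-trip sketch pairing Verify with the Sign class documented above (the key type and payload are arbitrary):

      import { generateKeyPairSync, createSign, createVerify } from 'node:crypto';
      
      const { privateKey, publicKey } = generateKeyPairSync('rsa', { modulusLength: 2048 });
      
      const sign = createSign('SHA256');
      sign.update('important message');
      sign.end();
      const signature = sign.sign(privateKey, 'hex');
      
      const verify = createVerify('SHA256');
      verify.update('important message');
      verify.end();
      console.log(verify.verify(publicKey, signature, 'hex')); // Prints: true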

    • chunk: any,
      callback?: (error: undefined | null | Error) => void
      ): boolean;

      The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

      The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

      While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

      Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

      If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

      function write(data, cb) {
        if (!stream.write(data)) {
          stream.once('drain', cb);
        } else {
          process.nextTick(cb);
        }
      }
      
      // Wait for cb to be called before doing any other write.
      write('hello', () => {
        console.log('Write completed, do more writes now.');
      });
      

      A Writable stream in object mode will always ignore the encoding argument.

      @param chunk

      Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

      @param callback

      Callback for when this chunk of data is flushed.

      @returns

      false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.

      write(
      chunk: any,
      encoding: BufferEncoding,
      callback?: (error: undefined | null | Error) => void
      ): boolean;

      The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

      The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

      While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

      Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

      If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

      function write(data, cb) {
        if (!stream.write(data)) {
          stream.once('drain', cb);
        } else {
          process.nextTick(cb);
        }
      }
      
      // Wait for cb to be called before doing any other write.
      write('hello', () => {
        console.log('Write completed, do more writes now.');
      });
      

      A Writable stream in object mode will always ignore the encoding argument.

      @param chunk

      Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

      @param encoding

      The encoding, if chunk is a string.

      @param callback

      Callback for when this chunk of data is flushed.

      @returns

      false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.

    • static addAbortListener(
      signal: AbortSignal,
      resource: (event: Event) => void
      ): Disposable;

      Listens once to the abort event on the provided signal.

      Listening to the abort event on abort signals is unsafe and may lead to resource leaks since another third party with the signal can call e.stopImmediatePropagation(). Unfortunately Node.js cannot change this since it would violate the web standard. Additionally, the original API makes it easy to forget to remove listeners.

      This API allows safely using AbortSignals in Node.js APIs by solving these two issues by listening to the event such that stopImmediatePropagation does not prevent the listener from running.

      Returns a disposable so that it may be unsubscribed from more easily.

      import { addAbortListener } from 'node:events';
      
      function example(signal) {
        let disposable;
        try {
          signal.addEventListener('abort', (e) => e.stopImmediatePropagation());
          disposable = addAbortListener(signal, (e) => {
            // Do something when signal is aborted.
          });
        } finally {
          disposable?.[Symbol.dispose]();
        }
      }
      
      @returns

      Disposable that removes the abort listener.

    • static fromWeb(
      writableStream: WritableStream,
      options?: Pick<WritableOptions<Writable>, 'signal' | 'decodeStrings' | 'highWaterMark' | 'objectMode'>
      ): Writable;

      A utility method for creating a Writable from a web WritableStream.

    • static getEventListeners(
      emitter: EventEmitter<DefaultEventMap> | EventTarget,
      name: string | symbol
      ): Function[];

      Returns a copy of the array of listeners for the event named eventName.

      For EventEmitters this behaves exactly the same as calling .listeners on the emitter.

      For EventTargets this is the only way to get the event listeners for the event target. This is useful for debugging and diagnostic purposes.

      import { getEventListeners, EventEmitter } from 'node:events';
      
      {
        const ee = new EventEmitter();
        const listener = () => console.log('Events are fun');
        ee.on('foo', listener);
        console.log(getEventListeners(ee, 'foo')); // [ [Function: listener] ]
      }
      {
        const et = new EventTarget();
        const listener = () => console.log('Events are fun');
        et.addEventListener('foo', listener);
        console.log(getEventListeners(et, 'foo')); // [ [Function: listener] ]
      }
      
    • static getMaxListeners(
      emitter: EventEmitter<DefaultEventMap> | EventTarget
      ): number;

      Returns the currently set max amount of listeners.

      For EventEmitters this behaves exactly the same as calling .getMaxListeners on the emitter.

      For EventTargets this is the only way to get the max event listeners for the event target. If the number of event handlers on a single EventTarget exceeds the max set, the EventTarget will print a warning.

      import { getMaxListeners, setMaxListeners, EventEmitter } from 'node:events';
      
      {
        const ee = new EventEmitter();
        console.log(getMaxListeners(ee)); // 10
        setMaxListeners(11, ee);
        console.log(getMaxListeners(ee)); // 11
      }
      {
        const et = new EventTarget();
        console.log(getMaxListeners(et)); // 10
        setMaxListeners(11, et);
        console.log(getMaxListeners(et)); // 11
      }
      
    • static on(
      emitter: EventEmitter,
      eventName: string | symbol,
      options?: StaticEventEmitterIteratorOptions
      ): AsyncIterator<any[]>;
      import { on, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ee = new EventEmitter();
      
      // Emit later on
      process.nextTick(() => {
        ee.emit('foo', 'bar');
        ee.emit('foo', 42);
      });
      
      for await (const event of on(ee, 'foo')) {
        // The execution of this inner block is synchronous and it
        // processes one event at a time (even with await). Do not use
        // if concurrent execution is required.
        console.log(event); // prints ['bar'] [42]
      }
      // Unreachable here
      

      Returns an AsyncIterator that iterates eventName events. It will throw if the EventEmitter emits 'error'. It removes all listeners when exiting the loop. The value returned by each iteration is an array composed of the emitted event arguments.

      An AbortSignal can be used to cancel waiting on events:

      import { on, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ac = new AbortController();
      
      (async () => {
        const ee = new EventEmitter();
      
        // Emit later on
        process.nextTick(() => {
          ee.emit('foo', 'bar');
          ee.emit('foo', 42);
        });
      
        for await (const event of on(ee, 'foo', { signal: ac.signal })) {
          // The execution of this inner block is synchronous and it
          // processes one event at a time (even with await). Do not use
          // if concurrent execution is required.
          console.log(event); // prints ['bar'] [42]
        }
        // Unreachable here
      })();
      
      process.nextTick(() => ac.abort());
      

      Use the close option to specify an array of event names that will end the iteration:

      import { on, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ee = new EventEmitter();
      
      // Emit later on
      process.nextTick(() => {
        ee.emit('foo', 'bar');
        ee.emit('foo', 42);
        ee.emit('close');
      });
      
      for await (const event of on(ee, 'foo', { close: ['close'] })) {
        console.log(event); // prints ['bar'] [42]
      }
      // the loop will exit after 'close' is emitted
      console.log('done'); // prints 'done'
      
      @returns

      An AsyncIterator that iterates eventName events emitted by the emitter

      static on(
      emitter: EventTarget,
      eventName: string,
      options?: StaticEventEmitterIteratorOptions
      ): AsyncIterator<any[]>;
      import { on, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ee = new EventEmitter();
      
      // Emit later on
      process.nextTick(() => {
        ee.emit('foo', 'bar');
        ee.emit('foo', 42);
      });
      
      for await (const event of on(ee, 'foo')) {
        // The execution of this inner block is synchronous and it
        // processes one event at a time (even with await). Do not use
        // if concurrent execution is required.
        console.log(event); // prints ['bar'] [42]
      }
      // Unreachable here
      

      Returns an AsyncIterator that iterates eventName events. It will throw if the EventEmitter emits 'error'. It removes all listeners when exiting the loop. The value returned by each iteration is an array composed of the emitted event arguments.

      An AbortSignal can be used to cancel waiting on events:

      import { on, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ac = new AbortController();
      
      (async () => {
        const ee = new EventEmitter();
      
        // Emit later on
        process.nextTick(() => {
          ee.emit('foo', 'bar');
          ee.emit('foo', 42);
        });
      
        for await (const event of on(ee, 'foo', { signal: ac.signal })) {
          // The execution of this inner block is synchronous and it
          // processes one event at a time (even with await). Do not use
          // if concurrent execution is required.
          console.log(event); // prints ['bar'] [42]
        }
        // Unreachable here
      })();
      
      process.nextTick(() => ac.abort());
      

      Use the close option to specify an array of event names that will end the iteration:

      import { on, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ee = new EventEmitter();
      
      // Emit later on
      process.nextTick(() => {
        ee.emit('foo', 'bar');
        ee.emit('foo', 42);
        ee.emit('close');
      });
      
      for await (const event of on(ee, 'foo', { close: ['close'] })) {
        console.log(event); // prints ['bar'] [42]
      }
      // the loop will exit after 'close' is emitted
      console.log('done'); // prints 'done'
      
      @returns

      An AsyncIterator that iterates eventName events emitted by the emitter

    • static once(
      emitter: EventEmitter,
      eventName: string | symbol,
      options?: StaticEventEmitterOptions
      ): Promise<any[]>;

      Creates a Promise that is fulfilled when the EventEmitter emits the given event or that is rejected if the EventEmitter emits 'error' while waiting. The Promise will resolve with an array of all the arguments emitted to the given event.

      This method is intentionally generic and works with the web platform EventTarget interface, which has no special 'error' event semantics and does not listen to the 'error' event.

      import { once, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ee = new EventEmitter();
      
      process.nextTick(() => {
        ee.emit('myevent', 42);
      });
      
      const [value] = await once(ee, 'myevent');
      console.log(value);
      
      const err = new Error('kaboom');
      process.nextTick(() => {
        ee.emit('error', err);
      });
      
      try {
        await once(ee, 'myevent');
      } catch (err) {
        console.error('error happened', err);
      }
      

      The special handling of the 'error' event is only used when events.once() is used to wait for another event. If events.once() is used to wait for the 'error' event itself, then it is treated as any other kind of event without special handling:

      import { EventEmitter, once } from 'node:events';
      
      const ee = new EventEmitter();
      
      once(ee, 'error')
        .then(([err]) => console.log('ok', err.message))
        .catch((err) => console.error('error', err.message));
      
      ee.emit('error', new Error('boom'));
      
      // Prints: ok boom
      

      An AbortSignal can be used to cancel waiting for the event:

      import { EventEmitter, once } from 'node:events';
      
      const ee = new EventEmitter();
      const ac = new AbortController();
      
      async function foo(emitter, event, signal) {
        try {
          await once(emitter, event, { signal });
          console.log('event emitted!');
        } catch (error) {
          if (error.name === 'AbortError') {
            console.error('Waiting for the event was canceled!');
          } else {
            console.error('There was an error', error.message);
          }
        }
      }
      
      foo(ee, 'foo', ac.signal);
      ac.abort(); // Abort waiting for the event
      ee.emit('foo'); // Prints: Waiting for the event was canceled!
      
      static once(
      emitter: EventTarget,
      eventName: string,
      options?: StaticEventEmitterOptions
      ): Promise<any[]>;

      Creates a Promise that is fulfilled when the EventEmitter emits the given event or that is rejected if the EventEmitter emits 'error' while waiting. The Promise will resolve with an array of all the arguments emitted to the given event.

      This method is intentionally generic and works with the web platform EventTarget interface, which has no special 'error' event semantics and does not listen to the 'error' event.

      import { once, EventEmitter } from 'node:events';
      import process from 'node:process';
      
      const ee = new EventEmitter();
      
      process.nextTick(() => {
        ee.emit('myevent', 42);
      });
      
      const [value] = await once(ee, 'myevent');
      console.log(value);
      
      const err = new Error('kaboom');
      process.nextTick(() => {
        ee.emit('error', err);
      });
      
      try {
        await once(ee, 'myevent');
      } catch (err) {
        console.error('error happened', err);
      }
      

      The special handling of the 'error' event is only used when events.once() is used to wait for another event. If events.once() is used to wait for the 'error' event itself, then it is treated as any other kind of event without special handling:

      import { EventEmitter, once } from 'node:events';
      
      const ee = new EventEmitter();
      
      once(ee, 'error')
        .then(([err]) => console.log('ok', err.message))
        .catch((err) => console.error('error', err.message));
      
      ee.emit('error', new Error('boom'));
      
      // Prints: ok boom
      

      An AbortSignal can be used to cancel waiting for the event:

      import { EventEmitter, once } from 'node:events';
      
      const ee = new EventEmitter();
      const ac = new AbortController();
      
      async function foo(emitter, event, signal) {
        try {
          await once(emitter, event, { signal });
          console.log('event emitted!');
        } catch (error) {
          if (error.name === 'AbortError') {
            console.error('Waiting for the event was canceled!');
          } else {
            console.error('There was an error', error.message);
          }
        }
      }
      
      foo(ee, 'foo', ac.signal);
      ac.abort(); // Abort waiting for the event
      ee.emit('foo'); // Prints: Waiting for the event was canceled!
      
    • static setMaxListeners(
      n?: number,
      ...eventTargets: EventEmitter<DefaultEventMap> | EventTarget[]
      ): void;
      import { setMaxListeners, EventEmitter } from 'node:events';
      
      const target = new EventTarget();
      const emitter = new EventEmitter();
      
      setMaxListeners(5, target, emitter);
      
      @param n

      A non-negative number. The maximum number of listeners per EventTarget event.

      @param eventTargets

      Zero or more {EventTarget} or {EventEmitter} instances. If none are specified, n is set as the default max for all newly created {EventTarget} and {EventEmitter} objects.

    • static toWeb(
      streamWritable: Writable
      ): WritableStream;

      A utility method for creating a web WritableStream from a Writable.

  • class X509Certificate

    Encapsulates an X509 certificate and provides read-only access to its information.

    const { X509Certificate } = await import('node:crypto');
    
    const x509 = new X509Certificate('{... pem encoded cert ...}');
    
    console.log(x509.subject);
    
    • readonly ca: boolean

      Will be true if this is a Certificate Authority (CA) certificate.

    • readonly fingerprint: string

      The SHA-1 fingerprint of this certificate.

      Because SHA-1 is cryptographically broken and because the security of SHA-1 is significantly worse than that of algorithms that are commonly used to sign certificates, consider using x509.fingerprint256 instead.

    • readonly fingerprint256: string

      The SHA-256 fingerprint of this certificate.

    • readonly fingerprint512: string

      The SHA-512 fingerprint of this certificate.

      Because computing the SHA-256 fingerprint is usually faster and because it is only half the size of the SHA-512 fingerprint, x509.fingerprint256 may be a better choice. While SHA-512 presumably provides a higher level of security in general, the security of SHA-256 matches that of most algorithms that are commonly used to sign certificates.

    • readonly infoAccess: undefined | string

      A textual representation of the certificate's authority information access extension.

      This is a line feed separated list of access descriptions. Each line begins with the access method and the kind of the access location, followed by a colon and the value associated with the access location.

      After the prefix denoting the access method and the kind of the access location, the remainder of each line might be enclosed in quotes to indicate that the value is a JSON string literal. For backward compatibility, Node.js only uses JSON string literals within this property when necessary to avoid ambiguity. Third-party code should be prepared to handle both possible entry formats.

    • readonly issuer: string

      The issuer identification included in this certificate.

    • readonly issuerCertificate?: X509Certificate

      The issuer certificate or undefined if the issuer certificate is not available.

    • readonly keyUsage: string[]

      An array detailing the key usages for this certificate.

    • readonly publicKey: KeyObject

      The public key KeyObject for this certificate.

    • readonly raw: Buffer

      A Buffer containing the DER encoding of this certificate.

    • readonly serialNumber: string

      The serial number of this certificate.

      Serial numbers are assigned by certificate authorities and do not uniquely identify certificates. Consider using x509.fingerprint256 as a unique identifier instead.

    • readonly subject: string

      The complete subject of this certificate.

    • readonly subjectAltName: undefined | string

      The subject alternative name specified for this certificate.

      This is a comma-separated list of subject alternative names. Each entry begins with a string identifying the kind of the subject alternative name followed by a colon and the value associated with the entry.

      Earlier versions of Node.js incorrectly assumed that it is safe to split this property at the two-character sequence ', ' (see CVE-2021-44532). However, both malicious and legitimate certificates can contain subject alternative names that include this sequence when represented as a string.

      After the prefix denoting the type of the entry, the remainder of each entry might be enclosed in quotes to indicate that the value is a JSON string literal. For backward compatibility, Node.js only uses JSON string literals within this property when necessary to avoid ambiguity. Third-party code should be prepared to handle both possible entry formats.

    • readonly validFrom: string

      The date/time from which this certificate is considered valid.

    • readonly validFromDate: Date

      The date/time from which this certificate is valid, encapsulated in a Date object.

    • readonly validTo: string

      The date/time until which this certificate is considered valid.

    • readonly validToDate: Date

      The date/time until which this certificate is valid, encapsulated in a Date object.

    • checkEmail(
      email: string,
      options?: Pick<X509CheckOptions, 'subject'>
      ): undefined | string;

      Checks whether the certificate matches the given email address.

      If the 'subject' option is undefined or set to 'default', the certificate subject is only considered if the subject alternative name extension either does not exist or does not contain any email addresses.

      If the 'subject' option is set to 'always' and if the subject alternative name extension either does not exist or does not contain a matching email address, the certificate subject is considered.

      If the 'subject' option is set to 'never', the certificate subject is never considered, even if the certificate contains no subject alternative names.

      @returns

      Returns email if the certificate matches, undefined if it does not.

    • checkHost(
      name: string,
      options?: X509CheckOptions
      ): undefined | string;

      Checks whether the certificate matches the given host name.

      If the certificate matches the given host name, the matching subject name is returned. The returned name might be an exact match (e.g., foo.example.com) or it might contain wildcards (e.g., *.example.com). Because host name comparisons are case-insensitive, the returned subject name might also differ from the given name in capitalization.

      If the 'subject' option is undefined or set to 'default', the certificate subject is only considered if the subject alternative name extension either does not exist or does not contain any DNS names. This behavior is consistent with RFC 2818 ("HTTP Over TLS").

      If the 'subject' option is set to 'always' and if the subject alternative name extension either does not exist or does not contain a matching DNS name, the certificate subject is considered.

      If the 'subject' option is set to 'never', the certificate subject is never considered, even if the certificate contains no subject alternative names.

      @returns

      Returns a subject name that matches name, or undefined if no subject name matches name.

    • checkIP(
      ip: string
      ): undefined | string;

      Checks whether the certificate matches the given IP address (IPv4 or IPv6).

      Only RFC 5280 iPAddress subject alternative names are considered, and they must match the given ip address exactly. Other subject alternative names as well as the subject field of the certificate are ignored.

      @returns

      Returns ip if the certificate matches, undefined if it does not.
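
      A minimal sketch tying the check* methods together; 'cert.pem' is a hypothetical PEM-encoded certificate file used only for illustration:

      import { readFileSync } from 'node:fs';
      const { X509Certificate } = await import('node:crypto');
      
      // Load a hypothetical certificate from disk.
      const x509 = new X509Certificate(readFileSync('cert.pem'));
      
      // Each check* method returns the matching value, or undefined on no match.
      console.log(x509.checkHost('example.com'));
      console.log(x509.checkEmail('admin@example.com'));
      console.log(x509.checkIP('192.0.2.1'));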

    • checkIssued(
      otherCert: X509Certificate
      ): boolean;

      Checks whether this certificate was issued by the given otherCert.

    • checkPrivateKey(
      privateKey: KeyObject
      ): boolean;

      Checks whether the public key for this certificate is consistent with the given private key.

      @param privateKey

      A private key.

    • toJSON(): string;

      There is no standard JSON encoding for X509 certificates. The toJSON() method returns a string containing the PEM encoded certificate.

    • toLegacyObject(): PeerCertificate;

      Returns information about this certificate using the legacy certificate object encoding.

    • toString(): string;

      Returns the PEM-encoded certificate.

    • verify(
      publicKey: KeyObject
      ): boolean;

      Verifies that this certificate was signed by the given public key. Does not perform any other validation checks on the certificate.

      @param publicKey

      A public key.

  • The DiffieHellmanGroup class takes a well-known modp group as its argument. It works the same as DiffieHellman, except that it does not allow changing its keys after creation. In other words, it does not implement setPublicKey() or setPrivateKey() methods.

    const { createDiffieHellmanGroup } = await import('node:crypto');
    const dh = createDiffieHellmanGroup('modp1');
    

    The name (e.g. 'modp1') is taken from RFC 2412 (modp1 and 2) and RFC 3526:

    perl -ne 'print "$1\n" if /"(modp\d+)"/' src/node_crypto_groups.h
    modp1  #  768 bits
    modp2  # 1024 bits
    modp5  # 1536 bits
    modp14 # 2048 bits
    modp15 # etc.
    modp16
    modp17
    modp18
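
    A minimal sketch of a complete exchange over a well-known group; the key-agreement methods inherited from DiffieHellman are assumed to behave as documented for that class:

    const { createDiffieHellmanGroup } = await import('node:crypto');
    
    // Both sides agree on the same well-known group.
    const alice = createDiffieHellmanGroup('modp14');
    const bob = createDiffieHellmanGroup('modp14');
    
    alice.generateKeys();
    bob.generateKeys();
    
    // Each side combines its private key with the other side's public key.
    const aliceSecret = alice.computeSecret(bob.getPublicKey());
    const bobSecret = bob.computeSecret(alice.getPublicKey());
    
    console.log(aliceSecret.equals(bobSecret)); // Prints: true
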
  • A convenient alias for crypto.webcrypto.subtle.

  • An implementation of the Web Crypto API standard.

    See the Web Crypto API documentation for details.

  • function checkPrime(
    candidate: LargeNumberLike,
    callback: (err: null | Error, result: boolean) => void
    ): void;

    Checks the primality of the candidate.

    function checkPrime(
    candidate: LargeNumberLike,
    options: CheckPrimeOptions,
    callback: (err: null | Error, result: boolean) => void
    ): void;

    Checks the primality of the candidate.

  • function checkPrimeSync(
    candidate: LargeNumberLike,
    options?: CheckPrimeOptions
    ): boolean;

    Checks the primality of the candidate.

    @param candidate

    A possible prime encoded as a sequence of big endian octets of arbitrary length.

    @returns

    true if the candidate is a prime with an error probability less than 0.25 ** options.checks.
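
    A minimal sketch pairing checkPrimeSync with generatePrimeSync (documented elsewhere in this module); both octet-sequence and bigint candidates are accepted:

    const { checkPrimeSync, generatePrimeSync } = await import('node:crypto');
    
    // A 256-bit probable prime as big-endian octets.
    const candidate = generatePrimeSync(256);
    console.log(checkPrimeSync(candidate));           // Prints: true
    console.log(checkPrimeSync(7n));                  // Prints: true
    console.log(checkPrimeSync(8n, { checks: 10 }));  // Prints: false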

  • function createCipheriv(
    algorithm: CipherCCMTypes,
    key: CipherKey,
    iv: BinaryLike,
    options: CipherCCMOptions
    ): CipherCCM;

    Creates and returns a Cipher object, with the given algorithm, key and initialization vector (iv).

    The options argument controls stream behavior and is optional except when a cipher in CCM or OCB mode (e.g. 'aes-128-ccm') is used. In that case, the authTagLength option is required and specifies the length of the authentication tag in bytes, see CCM mode. In GCM mode, the authTagLength option is not required but can be used to set the length of the authentication tag that will be returned by getAuthTag() and defaults to 16 bytes. For chacha20-poly1305, the authTagLength option defaults to 16 bytes.

    The algorithm is dependent on OpenSSL, examples are 'aes192', etc. On recent OpenSSL releases, openssl list -cipher-algorithms will display the available cipher algorithms.

    The key is the raw key used by the algorithm and iv is an initialization vector. Both arguments must be 'utf8' encoded strings, Buffers, TypedArrays, or DataViews. The key may optionally be a KeyObject of type secret. If the cipher does not need an initialization vector, iv may be null.

    When passing strings for key or iv, please consider caveats when using strings as inputs to cryptographic APIs.

    Initialization vectors should be unpredictable and unique; ideally, they will be cryptographically random. They do not have to be secret: IVs are typically just added to ciphertext messages unencrypted. It may sound contradictory that something has to be unpredictable and unique, but does not have to be secret; remember that an attacker must not be able to predict ahead of time what a given IV will be.

    @param options

    stream.transform options

    function createCipheriv(
    algorithm: CipherOCBTypes,
    key: CipherKey,
    iv: BinaryLike,
    options: CipherOCBOptions
    ): CipherOCB;

    Creates and returns a Cipher object, with the given algorithm, key and initialization vector (iv).

    The options argument controls stream behavior and is optional except when a cipher in CCM or OCB mode (e.g. 'aes-128-ccm') is used. In that case, the authTagLength option is required and specifies the length of the authentication tag in bytes, see CCM mode. In GCM mode, the authTagLength option is not required but can be used to set the length of the authentication tag that will be returned by getAuthTag() and defaults to 16 bytes. For chacha20-poly1305, the authTagLength option defaults to 16 bytes.

    The algorithm is dependent on OpenSSL, examples are 'aes192', etc. On recent OpenSSL releases, openssl list -cipher-algorithms will display the available cipher algorithms.

    The key is the raw key used by the algorithm and iv is an initialization vector. Both arguments must be 'utf8' encoded strings, Buffers, TypedArrays, or DataViews. The key may optionally be a KeyObject of type secret. If the cipher does not need an initialization vector, iv may be null.

    When passing strings for key or iv, please consider caveats when using strings as inputs to cryptographic APIs.

    Initialization vectors should be unpredictable and unique; ideally, they will be cryptographically random. They do not have to be secret: IVs are typically just added to ciphertext messages unencrypted. It may sound contradictory that something has to be unpredictable and unique, but does not have to be secret; remember that an attacker must not be able to predict ahead of time what a given IV will be.

    @param options

    stream.transform options

    function createCipheriv(
    algorithm: CipherGCMTypes,
    key: CipherKey,
    iv: BinaryLike,
    options?: CipherGCMOptions
    ): CipherGCM;

    Creates and returns a Cipher object, with the given algorithm, key and initialization vector (iv).

    The options argument controls stream behavior and is optional except when a cipher in CCM or OCB mode (e.g. 'aes-128-ccm') is used. In that case, the authTagLength option is required and specifies the length of the authentication tag in bytes, see CCM mode. In GCM mode, the authTagLength option is not required but can be used to set the length of the authentication tag that will be returned by getAuthTag() and defaults to 16 bytes. For chacha20-poly1305, the authTagLength option defaults to 16 bytes.

    The algorithm is dependent on OpenSSL, examples are 'aes192', etc. On recent OpenSSL releases, openssl list -cipher-algorithms will display the available cipher algorithms.

    The key is the raw key used by the algorithm and iv is an initialization vector. Both arguments must be 'utf8' encoded strings, Buffers, TypedArrays, or DataViews. The key may optionally be a KeyObject of type secret. If the cipher does not need an initialization vector, iv may be null.

    When passing strings for key or iv, please consider caveats when using strings as inputs to cryptographic APIs.

    Initialization vectors should be unpredictable and unique; ideally, they will be cryptographically random. They do not have to be secret: IVs are typically just added to ciphertext messages unencrypted. It may sound contradictory that something has to be unpredictable and unique, but does not have to be secret; remember that an attacker must not be able to predict ahead of time what a given IV will be.

    @param options

    stream.transform options

    function createCipheriv(
    algorithm: 'chacha20-poly1305',
    key: CipherKey,
    iv: BinaryLike,
    options?: CipherChaCha20Poly1305Options
    ): CipherChaCha20Poly1305;

    Creates and returns a Cipher object, with the given algorithm, key and initialization vector (iv).

    The options argument controls stream behavior and is optional except when a cipher in CCM or OCB mode (e.g. 'aes-128-ccm') is used. In that case, the authTagLength option is required and specifies the length of the authentication tag in bytes, see CCM mode. In GCM mode, the authTagLength option is not required but can be used to set the length of the authentication tag that will be returned by getAuthTag() and defaults to 16 bytes. For chacha20-poly1305, the authTagLength option defaults to 16 bytes.

    The algorithm is dependent on OpenSSL, examples are 'aes192', etc. On recent OpenSSL releases, openssl list -cipher-algorithms will display the available cipher algorithms.

    The key is the raw key used by the algorithm and iv is an initialization vector. Both arguments must be 'utf8' encoded strings, Buffers, TypedArrays, or DataViews. The key may optionally be a KeyObject of type secret. If the cipher does not need an initialization vector, iv may be null.

    When passing strings for key or iv, please consider caveats when using strings as inputs to cryptographic APIs.

    Initialization vectors should be unpredictable and unique; ideally, they will be cryptographically random. They do not have to be secret: IVs are typically just added to ciphertext messages unencrypted. It may sound contradictory that something has to be unpredictable and unique, but does not have to be secret; remember that an attacker must not be able to predict ahead of time what a given IV will be.

    @param options

    stream.transform options

    function createCipheriv(
    algorithm: string,
    key: CipherKey,
    iv: null | BinaryLike,
    options?: TransformOptions
    ): Cipher;

    Creates and returns a Cipher object, with the given algorithm, key and initialization vector (iv).

    The options argument controls stream behavior and is optional except when a cipher in CCM or OCB mode (e.g. 'aes-128-ccm') is used. In that case, the authTagLength option is required and specifies the length of the authentication tag in bytes, see CCM mode. In GCM mode, the authTagLength option is not required but can be used to set the length of the authentication tag that will be returned by getAuthTag() and defaults to 16 bytes. For chacha20-poly1305, the authTagLength option defaults to 16 bytes.

    The algorithm is dependent on OpenSSL, examples are 'aes192', etc. On recent OpenSSL releases, openssl list -cipher-algorithms will display the available cipher algorithms.

    The key is the raw key used by the algorithm and iv is an initialization vector. Both arguments must be 'utf8' encoded strings, Buffers, TypedArrays, or DataViews. The key may optionally be a KeyObject of type secret. If the cipher does not need an initialization vector, iv may be null.

    When passing strings for key or iv, please consider caveats when using strings as inputs to cryptographic APIs.

    Initialization vectors should be unpredictable and unique; ideally, they will be cryptographically random. They do not have to be secret: IVs are typically just added to ciphertext messages unencrypted. It may sound contradictory that something has to be unpredictable and unique, but does not have to be secret; remember that an attacker must not be able to predict ahead of time what a given IV will be.

    @param options

    stream.transform options

  • function createDecipheriv(
    algorithm: CipherCCMTypes,
    key: CipherKey,
    iv: BinaryLike,
    options: CipherCCMOptions
    ): DecipherCCM;

    Creates and returns a Decipher object that uses the given algorithm, key and initialization vector (iv).

    The options argument controls stream behavior and is optional except when a cipher in CCM or OCB mode (e.g. 'aes-128-ccm') is used. In that case, the authTagLength option is required and specifies the length of the authentication tag in bytes, see CCM mode. In GCM mode, the authTagLength option is not required but can be used to restrict accepted authentication tags to those with the specified length. For chacha20-poly1305, the authTagLength option defaults to 16 bytes.

    The algorithm is dependent on OpenSSL, examples are 'aes192', etc. On recent OpenSSL releases, openssl list -cipher-algorithms will display the available cipher algorithms.

    The key is the raw key used by the algorithm and iv is an initialization vector. Both arguments must be 'utf8' encoded strings, Buffers, TypedArrays, or DataViews. The key may optionally be a KeyObject of type secret. If the cipher does not need an initialization vector, iv may be null.

    When passing strings for key or iv, please consider caveats when using strings as inputs to cryptographic APIs.

    Initialization vectors should be unpredictable and unique; ideally, they will be cryptographically random. They do not have to be secret: IVs are typically just added to ciphertext messages unencrypted. It may sound contradictory that something has to be unpredictable and unique, but does not have to be secret; remember that an attacker must not be able to predict ahead of time what a given IV will be.

    @param options

    stream.transform options

    function createDecipheriv(
    algorithm: CipherOCBTypes,
    key: CipherKey,
    iv: BinaryLike,
    options: CipherOCBOptions
    ): DecipherOCB;

    Creates and returns a Decipher object that uses the given algorithm, key and initialization vector (iv).

    The options argument controls stream behavior and is optional except when a cipher in CCM or OCB mode (e.g. 'aes-128-ccm') is used. In that case, the authTagLength option is required and specifies the length of the authentication tag in bytes, see CCM mode. In GCM mode, the authTagLength option is not required but can be used to restrict accepted authentication tags to those with the specified length. For chacha20-poly1305, the authTagLength option defaults to 16 bytes.

    The algorithm is dependent on OpenSSL, examples are 'aes192', etc. On recent OpenSSL releases, openssl list -cipher-algorithms will display the available cipher algorithms.

    The key is the raw key used by the algorithm and iv is an initialization vector. Both arguments must be 'utf8' encoded strings, Buffers, TypedArrays, or DataViews. The key may optionally be a KeyObject of type secret. If the cipher does not need an initialization vector, iv may be null.

    When passing strings for key or iv, please consider caveats when using strings as inputs to cryptographic APIs.

    Initialization vectors should be unpredictable and unique; ideally, they will be cryptographically random. They do not have to be secret: IVs are typically just added to ciphertext messages unencrypted. It may sound contradictory that something has to be unpredictable and unique, but does not have to be secret; remember that an attacker must not be able to predict ahead of time what a given IV will be.

    @param options

    stream.transform options

    function createDecipheriv(
    algorithm: CipherGCMTypes,
    key: CipherKey,
    iv: BinaryLike,
    options?: CipherGCMOptions
    ): DecipherGCM;

    Creates and returns a Decipher object that uses the given algorithm, key and initialization vector (iv).

    The options argument controls stream behavior and is optional except when a cipher in CCM or OCB mode (e.g. 'aes-128-ccm') is used. In that case, the authTagLength option is required and specifies the length of the authentication tag in bytes, see CCM mode. In GCM mode, the authTagLength option is not required but can be used to restrict accepted authentication tags to those with the specified length. For chacha20-poly1305, the authTagLength option defaults to 16 bytes.

    The algorithm is dependent on OpenSSL, examples are 'aes192', etc. On recent OpenSSL releases, openssl list -cipher-algorithms will display the available cipher algorithms.

    The key is the raw key used by the algorithm and iv is an initialization vector. Both arguments must be 'utf8' encoded strings, Buffers, TypedArrays, or DataViews. The key may optionally be a KeyObject of type secret. If the cipher does not need an initialization vector, iv may be null.

    When passing strings for key or iv, please consider caveats when using strings as inputs to cryptographic APIs.

    Initialization vectors should be unpredictable and unique; ideally, they will be cryptographically random. They do not have to be secret: IVs are typically just added to ciphertext messages unencrypted. It may sound contradictory that something has to be unpredictable and unique, but does not have to be secret; remember that an attacker must not be able to predict ahead of time what a given IV will be.

    @param options

    stream.transform options

    function createDecipheriv(
    algorithm: 'chacha20-poly1305',
    key: CipherKey,
    iv: BinaryLike,
    options?: CipherChaCha20Poly1305Options
    ): DecipherChaCha20Poly1305;

    Creates and returns a Decipher object that uses the given algorithm, key and initialization vector (iv).

    The options argument controls stream behavior and is optional except when a cipher in CCM or OCB mode (e.g. 'aes-128-ccm') is used. In that case, the authTagLength option is required and specifies the length of the authentication tag in bytes, see CCM mode. In GCM mode, the authTagLength option is not required but can be used to restrict accepted authentication tags to those with the specified length. For chacha20-poly1305, the authTagLength option defaults to 16 bytes.

    The algorithm is dependent on OpenSSL, examples are 'aes192', etc. On recent OpenSSL releases, openssl list -cipher-algorithms will display the available cipher algorithms.

    The key is the raw key used by the algorithm and iv is an initialization vector. Both arguments must be 'utf8' encoded strings, Buffers, TypedArrays, or DataViews. The key may optionally be a KeyObject of type secret. If the cipher does not need an initialization vector, iv may be null.

    When passing strings for key or iv, please consider caveats when using strings as inputs to cryptographic APIs.

    Initialization vectors should be unpredictable and unique; ideally, they will be cryptographically random. They do not have to be secret: IVs are typically just added to ciphertext messages unencrypted. It may sound contradictory that something has to be unpredictable and unique, but does not have to be secret; remember that an attacker must not be able to predict ahead of time what a given IV will be.

    @param options

    stream.transform options

    function createDecipheriv(
    algorithm: string,
    key: CipherKey,
    iv: null | BinaryLike,
    options?: TransformOptions
    ): Decipher;

    Creates and returns a Decipher object that uses the given algorithm, key and initialization vector (iv).

    The options argument controls stream behavior and is optional except when a cipher in CCM or OCB mode (e.g. 'aes-128-ccm') is used. In that case, the authTagLength option is required and specifies the length of the authentication tag in bytes, see CCM mode. In GCM mode, the authTagLength option is not required but can be used to restrict accepted authentication tags to those with the specified length. For chacha20-poly1305, the authTagLength option defaults to 16 bytes.

    The algorithm is dependent on OpenSSL, examples are 'aes192', etc. On recent OpenSSL releases, openssl list -cipher-algorithms will display the available cipher algorithms.

    The key is the raw key used by the algorithm and iv is an initialization vector. Both arguments must be 'utf8' encoded strings, Buffers, TypedArrays, or DataViews. The key may optionally be a KeyObject of type secret. If the cipher does not need an initialization vector, iv may be null.

    When passing strings for key or iv, please consider caveats when using strings as inputs to cryptographic APIs.

    Initialization vectors should be unpredictable and unique; ideally, they will be cryptographically random. They do not have to be secret: IVs are typically just added to ciphertext messages unencrypted. It may sound contradictory that something has to be unpredictable and unique, but does not have to be secret; remember that an attacker must not be able to predict ahead of time what a given IV will be.

    @param options

    stream.transform options
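
    A minimal sketch of an AES-256-GCM round trip through createCipheriv and createDecipheriv, showing where the authentication tag fits:

    const {
      createCipheriv,
      createDecipheriv,
      randomBytes,
    } = await import('node:crypto');
    
    const key = randomBytes(32);
    const iv = randomBytes(12); // 96-bit IVs are conventional for GCM
    
    const cipher = createCipheriv('aes-256-gcm', key, iv);
    const ciphertext = Buffer.concat([
      cipher.update('some clear text data', 'utf8'),
      cipher.final(),
    ]);
    const tag = cipher.getAuthTag();
    
    const decipher = createDecipheriv('aes-256-gcm', key, iv);
    decipher.setAuthTag(tag); // must be set before final()
    const plaintext = Buffer.concat([
      decipher.update(ciphertext),
      decipher.final(),
    ]);
    console.log(plaintext.toString('utf8')); // Prints: some clear text data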

  • function createDiffieHellman(
    primeLength: number,
    generator?: number
    ): DiffieHellman;

    Creates a DiffieHellman key exchange object using the supplied prime and an optional specific generator.

    The generator argument can be a number, string, or Buffer. If generator is not specified, the value 2 is used.

    If primeEncoding is specified, prime is expected to be a string; otherwise a Buffer, TypedArray, or DataView is expected.

    If generatorEncoding is specified, generator is expected to be a string; otherwise a number, Buffer, TypedArray, or DataView is expected.

    function createDiffieHellman(
    prime: ArrayBuffer | ArrayBufferView<ArrayBufferLike>,
    generator?: number | ArrayBuffer | ArrayBufferView<ArrayBufferLike>
    ): DiffieHellman;

    Creates a DiffieHellman key exchange object using the supplied prime and an optional specific generator.

    The generator argument can be a number, string, or Buffer. If generator is not specified, the value 2 is used.

    If primeEncoding is specified, prime is expected to be a string; otherwise a Buffer, TypedArray, or DataView is expected.

    If generatorEncoding is specified, generator is expected to be a string; otherwise a number, Buffer, TypedArray, or DataView is expected.

    function createDiffieHellman(
    prime: ArrayBuffer | ArrayBufferView<ArrayBufferLike>,
    generator: string,
    generatorEncoding: BinaryToTextEncoding
    ): DiffieHellman;

    Creates a DiffieHellman key exchange object using the supplied prime and an optional specific generator.

    The generator argument can be a number, string, or Buffer. If generator is not specified, the value 2 is used.

    If primeEncoding is specified, prime is expected to be a string; otherwise a Buffer, TypedArray, or DataView is expected.

    If generatorEncoding is specified, generator is expected to be a string; otherwise a number, Buffer, TypedArray, or DataView is expected.

    @param generatorEncoding

    The encoding of the generator string.

    function createDiffieHellman(
    prime: string,
    primeEncoding: BinaryToTextEncoding,
    generator?: number | ArrayBuffer | ArrayBufferView<ArrayBufferLike>
    ): DiffieHellman;

    Creates a DiffieHellman key exchange object using the supplied prime and an optional specific generator.

    The generator argument can be a number, string, or Buffer. If generator is not specified, the value 2 is used.

    If primeEncoding is specified, prime is expected to be a string; otherwise a Buffer, TypedArray, or DataView is expected.

    If generatorEncoding is specified, generator is expected to be a string; otherwise a number, Buffer, TypedArray, or DataView is expected.

    @param primeEncoding

    The encoding of the prime string.

    function createDiffieHellman(
    prime: string,
    primeEncoding: BinaryToTextEncoding,
    generator: string,
    generatorEncoding: BinaryToTextEncoding
    ): DiffieHellman;

    Creates a DiffieHellman key exchange object using the supplied prime and an optional specific generator.

    The generator argument can be a number, string, or Buffer. If generator is not specified, the value 2 is used.

    If primeEncoding is specified, prime is expected to be a string; otherwise a Buffer, TypedArray, or DataView is expected.

    If generatorEncoding is specified, generator is expected to be a string; otherwise a number, Buffer, TypedArray, or DataView is expected.

    @param primeEncoding

    The encoding of the prime string.

    @param generatorEncoding

    The encoding of the generator string.

  • function createDiffieHellmanGroup(
    name: string
    ): DiffieHellmanGroup;

    An alias for getDiffieHellman.

  • function createECDH(
    curveName: string
    ): ECDH;

    Creates an Elliptic Curve Diffie-Hellman (ECDH) key exchange object using a predefined curve specified by the curveName string. Use getCurves to obtain a list of available curve names. On recent OpenSSL releases, openssl ecparam -list_curves will also display the name and description of each available elliptic curve.
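
    A minimal sketch of an ECDH exchange on the prime256v1 curve (one of the names returned by getCurves):

    const { createECDH } = await import('node:crypto');
    
    const alice = createECDH('prime256v1');
    const bob = createECDH('prime256v1');
    
    // generateKeys() returns the public key; computeSecret() derives the shared secret.
    const alicePublic = alice.generateKeys();
    const bobPublic = bob.generateKeys();
    
    const aliceSecret = alice.computeSecret(bobPublic);
    const bobSecret = bob.computeSecret(alicePublic);
    console.log(aliceSecret.equals(bobSecret)); // Prints: true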

  • function createHash(
    algorithm: string,
    options?: HashOptions
    ): Hash;

    Creates and returns a Hash object that can be used to generate hash digests using the given algorithm. Optional options argument controls stream behavior. For XOF hash functions such as 'shake256', the outputLength option can be used to specify the desired output length in bytes.

    The algorithm is dependent on the available algorithms supported by the version of OpenSSL on the platform. Examples are 'sha256', 'sha512', etc. On recent releases of OpenSSL, openssl list -digest-algorithms will display the available digest algorithms.

    Example: generating the sha256 sum of a file

    import {
      createReadStream,
    } from 'node:fs';
    import { argv } from 'node:process';
    const {
      createHash,
    } = await import('node:crypto');
    
    const filename = argv[2];
    
    const hash = createHash('sha256');
    
    const input = createReadStream(filename);
    input.on('readable', () => {
      // Only one element is going to be produced by the
      // hash stream.
      const data = input.read();
      if (data)
        hash.update(data);
      else {
        console.log(`${hash.digest('hex')} ${filename}`);
      }
    });
    
    @param options

    stream.transform options

  • function createHmac(
    algorithm: string,
    key: BinaryLike | KeyObject,
    options?: TransformOptions
    ): Hmac;

    Creates and returns an Hmac object that uses the given algorithm and key. Optional options argument controls stream behavior.

    The algorithm is dependent on the available algorithms supported by the version of OpenSSL on the platform. Examples are 'sha256', 'sha512', etc. On recent releases of OpenSSL, openssl list -digest-algorithms will display the available digest algorithms.

    The key is the HMAC key used to generate the cryptographic HMAC hash. If it is a KeyObject, its type must be secret. If it is a string, please consider caveats when using strings as inputs to cryptographic APIs. If it was obtained from a cryptographically secure source of entropy, such as randomBytes or generateKey, its length should not exceed the block size of algorithm (e.g., 512 bits for SHA-256).

    Example: generating the sha256 HMAC of a file

    import {
      createReadStream,
    } from 'node:fs';
    import { argv } from 'node:process';
    const {
      createHmac,
    } = await import('node:crypto');
    
    const filename = argv[2];
    
    const hmac = createHmac('sha256', 'a secret');
    
    const input = createReadStream(filename);
    input.on('readable', () => {
      // Only one element is going to be produced by the
      // hash stream.
      const data = input.read();
      if (data)
        hmac.update(data);
      else {
        console.log(`${hmac.digest('hex')} ${filename}`);
      }
    });
    
    @param options

    stream.transform options

  • function createPrivateKey(
    key: string | Buffer<ArrayBufferLike> | PrivateKeyInput | JsonWebKeyInput
    ): KeyObject;

    Creates and returns a new key object containing a private key. If key is a string or Buffer, format is assumed to be 'pem'; otherwise, key must be an object with the properties described above.

    If the private key is encrypted, a passphrase must be specified. The length of the passphrase is limited to 1024 bytes.
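
    A minimal sketch: exporting an encrypted PKCS#8 key and re-importing it with createPrivateKey (the passphrase here is illustrative only):

    const { generateKeyPairSync, createPrivateKey } = await import('node:crypto');
    
    const { privateKey } = generateKeyPairSync('ec', { namedCurve: 'P-256' });
    
    // Export the key encrypted under a passphrase...
    const encryptedPem = privateKey.export({
      type: 'pkcs8',
      format: 'pem',
      cipher: 'aes-256-cbc',
      passphrase: 'top secret',
    });
    
    // ...and re-import it; the passphrase is required because the key is encrypted.
    const restored = createPrivateKey({ key: encryptedPem, passphrase: 'top secret' });
    console.log(restored.asymmetricKeyType); // Prints: ec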

  • function createPublicKey(
    key: string | Buffer<ArrayBufferLike> | KeyObject | JsonWebKeyInput | PublicKeyInput
    ): KeyObject;

    Creates and returns a new key object containing a public key. If key is a string or Buffer, format is assumed to be 'pem'; if key is a KeyObject with type 'private', the public key is derived from the given private key; otherwise, key must be an object with the properties described above.

    If the format is 'pem', the 'key' may also be an X.509 certificate.

    Because public keys can be derived from private keys, a private key may be passed instead of a public key. In that case, this function behaves as if createPrivateKey had been called, except that the type of the returned KeyObject will be 'public' and that the private key cannot be extracted from the returned KeyObject. Similarly, if a KeyObject with type 'private' is given, a new KeyObject with type 'public' will be returned and it will be impossible to extract the private key from the returned object.
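
    A minimal sketch of deriving a public key from a private one:

    const { generateKeyPairSync, createPublicKey } = await import('node:crypto');
    
    const { privateKey } = generateKeyPairSync('ed25519');
    
    // The returned KeyObject is public; the private material cannot be extracted from it.
    const publicKey = createPublicKey(privateKey);
    console.log(publicKey.type); // Prints: public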

  • function createSecretKey(
    key: ArrayBufferView
    ): KeyObject;

    Creates and returns a new key object containing a secret key for symmetric encryption or Hmac.

    function createSecretKey(
    key: string,
    encoding: BufferEncoding
    ): KeyObject;

    Creates and returns a new key object containing a secret key for symmetric encryption or Hmac.

    @param encoding

    The string encoding when key is a string.
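
    A minimal sketch: wrapping random bytes in a secret KeyObject and feeding it to createHmac:

    const { createSecretKey, createHmac, randomBytes } = await import('node:crypto');
    
    // A 32-byte secret key, suitable for HMAC-SHA256.
    const key = createSecretKey(randomBytes(32));
    
    const mac = createHmac('sha256', key).update('some data').digest('hex');
    console.log(mac);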

  • function createSign(
    algorithm: string,
    options?: WritableOptions
    ): Sign;

    Creates and returns a Sign object that uses the given algorithm. Use getHashes to obtain the names of the available digest algorithms. Optional options argument controls the stream.Writable behavior.

    In some cases, a Sign instance can be created using the name of a signature algorithm, such as 'RSA-SHA256', instead of a digest algorithm. This will use the corresponding digest algorithm. This does not work for all signature algorithms, such as 'ecdsa-with-SHA256', so it is best to always use digest algorithm names.

    @param options

    stream.Writable options

  • function createVerify(
    algorithm: string,
    options?: WritableOptions
    ): Verify;

    Creates and returns a Verify object that uses the given algorithm. Use getHashes to obtain an array of names of the available signing algorithms. Optional options argument controls the stream.Writable behavior.

    In some cases, a Verify instance can be created using the name of a signature algorithm, such as 'RSA-SHA256', instead of a digest algorithm. This will use the corresponding digest algorithm. This does not work for all signature algorithms, such as 'ecdsa-with-SHA256', so it is best to always use digest algorithm names.

    @param options

    stream.Writable options

  • function diffieHellman(
    options: { privateKey: KeyObject; publicKey: KeyObject }
    ): Buffer;

    Computes the Diffie-Hellman secret based on a privateKey and a publicKey. Both keys must have the same asymmetricKeyType, which must be one of 'dh' (for Diffie-Hellman), 'ec' (for ECDH), 'x448', or 'x25519' (for ECDH-ES).
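
    A minimal sketch using X25519 key pairs generated with generateKeyPairSync:

    const { generateKeyPairSync, diffieHellman } = await import('node:crypto');
    
    const alice = generateKeyPairSync('x25519');
    const bob = generateKeyPairSync('x25519');
    
    // Both computations yield the same shared secret.
    const secretA = diffieHellman({ privateKey: alice.privateKey, publicKey: bob.publicKey });
    const secretB = diffieHellman({ privateKey: bob.privateKey, publicKey: alice.publicKey });
    console.log(secretA.equals(secretB)); // Prints: true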

  • function generateKey(
    type: 'hmac' | 'aes',
    options: { length: number },
    callback: (err: null | Error, key: KeyObject) => void
    ): void;

    Asynchronously generates a new random secret key of the given length. The type will determine which validations will be performed on the length.

    const {
      generateKey,
    } = await import('node:crypto');
    
    generateKey('hmac', { length: 512 }, (err, key) => {
      if (err) throw err;
      console.log(key.export().toString('hex'));  // 46e..........620
    });
    

    The size of a generated HMAC key should not exceed the block size of the underlying hash function. See createHmac for more information.

    @param type

    The intended use of the generated secret key. Currently accepted values are 'hmac' and 'aes'.

  • function generateKeyPair(
    type: 'rsa',
    options: RSAKeyPairOptions<'pem', 'pem'>,
    callback: (err: null | Error, publicKey: string, privateKey: string) => void
    ): void;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

    const {
      generateKeyPair,
    } = await import('node:crypto');
    
    generateKeyPair('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    }, (err, publicKey, privateKey) => {
      // Handle errors and use the generated key pair.
    });
    

    On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

    If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties (a promise-based sketch follows the overload signatures below).

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPair(
    type: 'rsa',
    options: RSAKeyPairOptions<'pem', 'der'>,
    callback: (err: null | Error, publicKey: string, privateKey: Buffer) => void
    ): void;

    function generateKeyPair(
    type: 'rsa',
    options: RSAKeyPairOptions<'der', 'pem'>,
    callback: (err: null | Error, publicKey: Buffer, privateKey: string) => void
    ): void;

    function generateKeyPair(
    type: 'rsa',
    options: RSAKeyPairOptions<'der', 'der'>,
    callback: (err: null | Error, publicKey: Buffer, privateKey: Buffer) => void
    ): void;

    function generateKeyPair(
    type: 'rsa',
    callback: (err: null | Error, publicKey: KeyObject, privateKey: KeyObject) => void
    ): void;

    function generateKeyPair(
    type: 'rsa-pss',
    options: RSAPSSKeyPairOptions<'pem', 'pem'>,
    callback: (err: null | Error, publicKey: string, privateKey: string) => void
    ): void;

    function generateKeyPair(
    type: 'rsa-pss',
    options: RSAPSSKeyPairOptions<'pem', 'der'>,
    callback: (err: null | Error, publicKey: string, privateKey: Buffer) => void
    ): void;

    function generateKeyPair(
    type: 'rsa-pss',
    options: RSAPSSKeyPairOptions<'der', 'pem'>,
    callback: (err: null | Error, publicKey: Buffer, privateKey: string) => void
    ): void;

    function generateKeyPair(
    type: 'rsa-pss',
    options: RSAPSSKeyPairOptions<'der', 'der'>,
    callback: (err: null | Error, publicKey: Buffer, privateKey: Buffer) => void
    ): void;

    function generateKeyPair(
    type: 'rsa-pss',
    callback: (err: null | Error, publicKey: KeyObject, privateKey: KeyObject) => void
    ): void;

    function generateKeyPair(
    type: 'dsa',
    options: DSAKeyPairOptions<'pem', 'pem'>,
    callback: (err: null | Error, publicKey: string, privateKey: string) => void
    ): void;

    function generateKeyPair(
    type: 'dsa',
    options: DSAKeyPairOptions<'pem', 'der'>,
    callback: (err: null | Error, publicKey: string, privateKey: Buffer) => void
    ): void;

    function generateKeyPair(
    type: 'dsa',
    options: DSAKeyPairOptions<'der', 'pem'>,
    callback: (err: null | Error, publicKey: Buffer, privateKey: string) => void
    ): void;

    function generateKeyPair(
    type: 'dsa',
    options: DSAKeyPairOptions<'der', 'der'>,
    callback: (err: null | Error, publicKey: Buffer, privateKey: Buffer) => void
    ): void;

    function generateKeyPair(
    type: 'dsa',
    callback: (err: null | Error, publicKey: KeyObject, privateKey: KeyObject) => void
    ): void;

    function generateKeyPair(
    type: 'ec',
    options: ECKeyPairOptions<'pem', 'pem'>,
    callback: (err: null | Error, publicKey: string, privateKey: string) => void
    ): void;

    function generateKeyPair(
    type: 'ec',
    options: ECKeyPairOptions<'pem', 'der'>,
    callback: (err: null | Error, publicKey: string, privateKey: Buffer) => void
    ): void;

    function generateKeyPair(
    type: 'ec',
    options: ECKeyPairOptions<'der', 'pem'>,
    callback: (err: null | Error, publicKey: Buffer, privateKey: string) => void
    ): void;

    function generateKeyPair(
    type: 'ec',
    options: ECKeyPairOptions<'der', 'der'>,
    callback: (err: null | Error, publicKey: Buffer, privateKey: Buffer) => void
    ): void;

    function generateKeyPair(
    type: 'ec',
    callback: (err: null | Error, publicKey: KeyObject, privateKey: KeyObject) => void
    ): void;

    function generateKeyPair(
    type: 'ed25519',
    options: ED25519KeyPairOptions<'pem', 'pem'>,
    callback: (err: null | Error, publicKey: string, privateKey: string) => void
    ): void;

    function generateKeyPair(
    type: 'ed25519',
    options: ED25519KeyPairOptions<'pem', 'der'>,
    callback: (err: null | Error, publicKey: string, privateKey: Buffer) => void
    ): void;

    function generateKeyPair(
    type: 'ed25519',
    options: ED25519KeyPairOptions<'der', 'pem'>,
    callback: (err: null | Error, publicKey: Buffer, privateKey: string) => void
    ): void;

    function generateKeyPair(
    type: 'ed25519',
    options: ED25519KeyPairOptions<'der', 'der'>,
    callback: (err: null | Error, publicKey: Buffer, privateKey: Buffer) => void
    ): void;

    function generateKeyPair(
    type: 'ed25519',
    options: undefined | ED25519KeyPairKeyObjectOptions,
    callback: (err: null | Error, publicKey: KeyObject, privateKey: KeyObject) => void
    ): void;

    function generateKeyPair(
    type: 'ed448',
    options: ED448KeyPairOptions<'pem', 'pem'>,
    callback: (err: null | Error, publicKey: string, privateKey: string) => void
    ): void;

    function generateKeyPair(
    type: 'ed448',
    options: ED448KeyPairOptions<'pem', 'der'>,
    callback: (err: null | Error, publicKey: string, privateKey: Buffer) => void
    ): void;

    function generateKeyPair(
    type: 'ed448',
    options: ED448KeyPairOptions<'der', 'pem'>,
    callback: (err: null | Error, publicKey: Buffer, privateKey: string) => void
    ): void;

    function generateKeyPair(
    type: 'ed448',
    options: ED448KeyPairOptions<'der', 'der'>,
    callback: (err: null | Error, publicKey: Buffer, privateKey: Buffer) => void
    ): void;

    function generateKeyPair(
    type: 'ed448',
    options: undefined | ED448KeyPairKeyObjectOptions,
    callback: (err: null | Error, publicKey: KeyObject, privateKey: KeyObject) => void
    ): void;

    function generateKeyPair(
    type: 'x25519',
    options: X25519KeyPairOptions<'pem', 'pem'>,
    callback: (err: null | Error, publicKey: string, privateKey: string) => void
    ): void;

    function generateKeyPair(
    type: 'x25519',
    options: X25519KeyPairOptions<'pem', 'der'>,
    callback: (err: null | Error, publicKey: string, privateKey: Buffer) => void
    ): void;

    function generateKeyPair(
    type: 'x25519',
    options: X25519KeyPairOptions<'der', 'pem'>,
    callback: (err: null | Error, publicKey: Buffer, privateKey: string) => void
    ): void;

    function generateKeyPair(
    type: 'x25519',
    options: X25519KeyPairOptions<'der', 'der'>,
    callback: (err: null | Error, publicKey: Buffer, privateKey: Buffer) => void
    ): void;

    function generateKeyPair(
    type: 'x25519',
    options: undefined | X25519KeyPairKeyObjectOptions,
    callback: (err: null | Error, publicKey: KeyObject, privateKey: KeyObject) => void
    ): void;

    function generateKeyPair(
    type: 'x448',
    options: X448KeyPairOptions<'pem', 'pem'>,
    callback: (err: null | Error, publicKey: string, privateKey: string) => void
    ): void;

    function generateKeyPair(
    type: 'x448',
    options: X448KeyPairOptions<'pem', 'der'>,
    callback: (err: null | Error, publicKey: string, privateKey: Buffer) => void
    ): void;

    function generateKeyPair(
    type: 'x448',
    options: X448KeyPairOptions<'der', 'pem'>,
    callback: (err: null | Error, publicKey: Buffer, privateKey: string) => void
    ): void;

    function generateKeyPair(
    type: 'x448',
    options: X448KeyPairOptions<'der', 'der'>,
    callback: (err: null | Error, publicKey: Buffer, privateKey: Buffer) => void
    ): void;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

    const {
      generateKeyPair,
    } = await import('node:crypto');
    
    generateKeyPair('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    }, (err, publicKey, privateKey) => {
      // Handle errors and use the generated key pair.
    });
    

    On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

    If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPair(
    type: 'x448',
    options: undefined | X448KeyPairKeyObjectOptions,
    callback: (err: null | Error, publicKey: KeyObject, privateKey: KeyObject) => void
    ): void;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

    const {
      generateKeyPair,
    } = await import('node:crypto');
    
    generateKeyPair('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    }, (err, publicKey, privateKey) => {
      // Handle errors and use the generated key pair.
    });
    

    On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

    If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.
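
    As a hedged sketch of the KeyObject variant above (assuming Bun's behavior matches Node.js): when no publicKeyEncoding or privateKeyEncoding is given, the callback receives KeyObject instances rather than strings or Buffers.

    const { generateKeyPair } = await import('node:crypto');
    
    generateKeyPair('x448', undefined, (err, publicKey, privateKey) => {
      if (err) throw err;
      // Both arguments are KeyObject instances.
      console.log(publicKey.asymmetricKeyType);  // 'x448'
      console.log(privateKey.type);              // 'private'
    });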

  • function generateKeyPairSync(
    type: 'rsa',
    options: RSAKeyPairOptions<'pem', 'pem'>
    ): KeyPairSyncResult<string, string>;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const {
      generateKeyPairSync,
    } = await import('node:crypto');
    
    const {
      publicKey,
      privateKey,
    } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });
    

    The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.
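
    As a hedged follow-up to the example above (reusing its publicKey and privateKey strings), an encrypted PKCS#8 key can only be used by supplying the same passphrase, for instance with the one-shot sign() and verify() functions:

    const { sign, verify } = await import('node:crypto');
    
    const data = Buffer.from('message to sign');
    
    // The passphrase is required to decrypt the PKCS#8 private key.
    const signature = sign('sha256', data, { key: privateKey, passphrase: 'top secret' });
    console.log(verify('sha256', data, publicKey, signature));  // true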

    function generateKeyPairSync(
    type: 'rsa',
    options: RSAKeyPairOptions<'pem', 'der'>
    ): KeyPairSyncResult<string, Buffer<ArrayBufferLike>>;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const {
      generateKeyPairSync,
    } = await import('node:crypto');
    
    const {
      publicKey,
      privateKey,
    } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });
    

    The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.
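
    A minimal sketch of this mixed encoding, assuming Bun matches Node.js: the PEM-encoded public key comes back as a string, the DER-encoded private key as a Buffer.

    const { generateKeyPairSync } = await import('node:crypto');
    
    const { publicKey, privateKey } = generateKeyPairSync('rsa', {
      modulusLength: 2048,
      publicKeyEncoding: { type: 'spki', format: 'pem' },
      privateKeyEncoding: { type: 'pkcs8', format: 'der' },
    });
    
    console.log(typeof publicKey);             // 'string'
    console.log(Buffer.isBuffer(privateKey));  // true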

    function generateKeyPairSync(
    type: 'rsa',
    options: RSAKeyPairOptions<'der', 'pem'>
    ): KeyPairSyncResult<Buffer<ArrayBufferLike>, string>;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const {
      generateKeyPairSync,
    } = await import('node:crypto');
    
    const {
      publicKey,
      privateKey,
    } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });
    

    The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPairSync(
    type: 'rsa',
    options: RSAKeyPairOptions<'der', 'der'>
    ): KeyPairSyncResult<Buffer<ArrayBufferLike>, Buffer<ArrayBufferLike>>;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const {
      generateKeyPairSync,
    } = await import('node:crypto');
    
    const {
      publicKey,
      privateKey,
    } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });
    

    The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPairSync(
    type: 'rsa',
    options: undefined | RSAKeyPairKeyObjectOptions
    ): KeyPairKeyObjectResult;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const {
      generateKeyPairSync,
    } = await import('node:crypto');
    
    const {
      publicKey,
      privateKey,
    } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });
    

    The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPairSync(
    type: 'rsa-pss',
    options: RSAPSSKeyPairOptions<'pem', 'pem'>
    ): KeyPairSyncResult<string, string>;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const {
      generateKeyPairSync,
    } = await import('node:crypto');
    
    const {
      publicKey,
      privateKey,
    } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });
    

    The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.
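
    For 'rsa-pss' specifically, a hedged sketch using the PSS option names from current Node.js releases (hashAlgorithm, mgf1HashAlgorithm, saltLength); omitting the encodings yields KeyObject instances:

    const { generateKeyPairSync } = await import('node:crypto');
    
    const { publicKey, privateKey } = generateKeyPairSync('rsa-pss', {
      modulusLength: 2048,
      hashAlgorithm: 'sha256',
      mgf1HashAlgorithm: 'sha256',
      saltLength: 32,
    });
    
    console.log(publicKey.asymmetricKeyType);  // 'rsa-pss'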

    function generateKeyPairSync(
    type: 'rsa-pss',
    options: RSAPSSKeyPairOptions<'pem', 'der'>
    ): KeyPairSyncResult<string, Buffer<ArrayBufferLike>>;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const {
      generateKeyPairSync,
    } = await import('node:crypto');
    
    const {
      publicKey,
      privateKey,
    } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });
    

    The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPairSync(
    type: 'rsa-pss',
    options: RSAPSSKeyPairOptions<'der', 'pem'>
    ): KeyPairSyncResult<Buffer<ArrayBufferLike>, string>;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const {
      generateKeyPairSync,
    } = await import('node:crypto');
    
    const {
      publicKey,
      privateKey,
    } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });
    

    The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPairSync(
    type: 'rsa-pss',
    options: RSAPSSKeyPairOptions<'der', 'der'>
    ): KeyPairSyncResult<Buffer<ArrayBufferLike>, Buffer<ArrayBufferLike>>;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const {
      generateKeyPairSync,
    } = await import('node:crypto');
    
    const {
      publicKey,
      privateKey,
    } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });
    

    The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPairSync(
    type: 'rsa-pss',
    options: undefined | RSAPSSKeyPairKeyObjectOptions
    ): KeyPairKeyObjectResult;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const {
      generateKeyPairSync,
    } = await import('node:crypto');
    
    const {
      publicKey,
      privateKey,
    } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });
    

    The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPairSync(
    type: 'dsa',
    options: DSAKeyPairOptions<'pem', 'pem'>
    ): KeyPairSyncResult<string, string>;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const {
      generateKeyPairSync,
    } = await import('node:crypto');
    
    const {
      publicKey,
      privateKey,
    } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });
    

    The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.
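
    For 'dsa' specifically, a hedged sketch (modulusLength and divisorLength are the DSA-specific options; with no encodings, KeyObject instances are returned):

    const { generateKeyPairSync } = await import('node:crypto');
    
    const { publicKey, privateKey } = generateKeyPairSync('dsa', {
      modulusLength: 2048,
      divisorLength: 256,
    });
    
    console.log(publicKey.asymmetricKeyType);  // 'dsa'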

    function generateKeyPairSync(
    type: 'dsa',
    options: DSAKeyPairOptions<'pem', 'der'>
    ): KeyPairSyncResult<string, Buffer<ArrayBufferLike>>;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const {
      generateKeyPairSync,
    } = await import('node:crypto');
    
    const {
      publicKey,
      privateKey,
    } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });
    

    The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPairSync(
    type: 'dsa',
    options: DSAKeyPairOptions<'der', 'pem'>
    ): KeyPairSyncResult<Buffer<ArrayBufferLike>, string>;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const {
      generateKeyPairSync,
    } = await import('node:crypto');
    
    const {
      publicKey,
      privateKey,
    } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });
    

    The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPairSync(
    type: 'dsa',
    options: DSAKeyPairOptions<'der', 'der'>
    ): KeyPairSyncResult<Buffer<ArrayBufferLike>, Buffer<ArrayBufferLike>>;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const {
      generateKeyPairSync,
    } = await import('node:crypto');
    
    const {
      publicKey,
      privateKey,
    } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });
    

    The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPairSync(
    type: 'dsa',
    options: undefined | DSAKeyPairKeyObjectOptions
    ): KeyPairKeyObjectResult;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const {
      generateKeyPairSync,
    } = await import('node:crypto');
    
    const {
      publicKey,
      privateKey,
    } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });
    

    The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPairSync(
    type: 'ec',
    options: ECKeyPairOptions<'pem', 'pem'>
    ): KeyPairSyncResult<string, string>;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const {
      generateKeyPairSync,
    } = await import('node:crypto');
    
    const {
      publicKey,
      privateKey,
    } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });
    

    The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.
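
    For 'ec' specifically, a hedged sketch: namedCurve selects the curve, and with no encodings the result is a pair of KeyObject instances.

    const { generateKeyPairSync } = await import('node:crypto');
    
    const { publicKey, privateKey } = generateKeyPairSync('ec', {
      namedCurve: 'P-256',
    });
    
    console.log(publicKey.asymmetricKeyType);     // 'ec'
    console.log(publicKey.asymmetricKeyDetails);  // { namedCurve: 'prime256v1' }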

    function generateKeyPairSync(
    type: 'ec',
    options: ECKeyPairOptions<'pem', 'der'>
    ): KeyPairSyncResult<string, Buffer<ArrayBufferLike>>;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const {
      generateKeyPairSync,
    } = await import('node:crypto');
    
    const {
      publicKey,
      privateKey,
    } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });
    

    The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPairSync(
    type: 'ec',
    options: ECKeyPairOptions<'der', 'pem'>
    ): KeyPairSyncResult<Buffer<ArrayBufferLike>, string>;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const {
      generateKeyPairSync,
    } = await import('node:crypto');
    
    const {
      publicKey,
      privateKey,
    } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });
    

    The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPairSync(
    type: 'ec',
    options: ECKeyPairOptions<'der', 'der'>
    ): KeyPairSyncResult<Buffer<ArrayBufferLike>, Buffer<ArrayBufferLike>>;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const {
      generateKeyPairSync,
    } = await import('node:crypto');
    
    const {
      publicKey,
      privateKey,
    } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });
    

    The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPairSync(
    type: 'ec',
    options: undefined | ECKeyPairKeyObjectOptions
    ): KeyPairKeyObjectResult;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const {
      generateKeyPairSync,
    } = await import('node:crypto');
    
    const {
      publicKey,
      privateKey,
    } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });
    

    The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPairSync(
    type: 'ed25519',
    options: ED25519KeyPairOptions<'pem', 'pem'>
    ): KeyPairSyncResult<string, string>;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const {
      generateKeyPairSync,
    } = await import('node:crypto');
    
    const {
      publicKey,
      privateKey,
    } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });
    

    The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.
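
    For 'ed25519' specifically, a hedged sketch assuming Bun matches Node.js: no options are required, and the one-shot sign()/verify() functions take null as the algorithm for Ed25519 keys.

    const { generateKeyPairSync, sign, verify } = await import('node:crypto');
    
    const { publicKey, privateKey } = generateKeyPairSync('ed25519');
    const data = Buffer.from('hello');
    
    const signature = sign(null, data, privateKey);
    console.log(verify(null, data, publicKey, signature));  // true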

    function generateKeyPairSync(
    type: 'ed25519',
    options: ED25519KeyPairOptions<'pem', 'der'>
    ): KeyPairSyncResult<string, Buffer<ArrayBufferLike>>;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const {
      generateKeyPairSync,
    } = await import('node:crypto');
    
    const {
      publicKey,
      privateKey,
    } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });
    

    The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPairSync(
    type: 'ed25519',
    options: ED25519KeyPairOptions<'der', 'pem'>
    ): KeyPairSyncResult<Buffer<ArrayBufferLike>, string>;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const {
      generateKeyPairSync,
    } = await import('node:crypto');
    
    const {
      publicKey,
      privateKey,
    } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });
    

    The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPairSync(
    type: 'ed25519',
    options: ED25519KeyPairOptions<'der', 'der'>
    ): KeyPairSyncResult<Buffer<ArrayBufferLike>, Buffer<ArrayBufferLike>>;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const {
      generateKeyPairSync,
    } = await import('node:crypto');
    
    const {
      publicKey,
      privateKey,
    } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });
    

    The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPairSync(
    type: 'ed25519',
    options: undefined | ED25519KeyPairKeyObjectOptions
    ): KeyPairKeyObjectResult;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const {
      generateKeyPairSync,
    } = await import('node:crypto');
    
    const {
      publicKey,
      privateKey,
    } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });
    

    The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPairSync(
    type: 'ed448',
    options: ED448KeyPairOptions<'pem', 'pem'>
    ): KeyPairSyncResult<string, string>;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const {
      generateKeyPairSync,
    } = await import('node:crypto');
    
    const {
      publicKey,
      privateKey,
    } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });
    

    The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPairSync(
    type: 'ed448',
    options: ED448KeyPairOptions<'pem', 'der'>
    ): KeyPairSyncResult<string, Buffer<ArrayBufferLike>>;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const {
      generateKeyPairSync,
    } = await import('node:crypto');
    
    const {
      publicKey,
      privateKey,
    } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });
    

    The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPairSync(
    type: 'ed448',
    options: ED448KeyPairOptions<'der', 'pem'>
    ): KeyPairSyncResult<Buffer<ArrayBufferLike>, string>;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const {
      generateKeyPairSync,
    } = await import('node:crypto');
    
    const {
      publicKey,
      privateKey,
    } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });
    

    The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPairSync(
    type: 'ed448',
    options: ED448KeyPairOptions<'der', 'der'>
    ): KeyPairSyncResult<Buffer<ArrayBufferLike>, Buffer<ArrayBufferLike>>;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const {
      generateKeyPairSync,
    } = await import('node:crypto');
    
    const {
      publicKey,
      privateKey,
    } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });
    

    The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPairSync(
    type: 'ed448',
    options: undefined | ED448KeyPairKeyObjectOptions
    ): KeyPairKeyObjectResult;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const {
      generateKeyPairSync,
    } = await import('node:crypto');
    
    const {
      publicKey,
      privateKey,
    } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });
    

    The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPairSync(
    type: 'x25519',
    options: X25519KeyPairOptions<'pem', 'pem'>
    ): KeyPairSyncResult<string, string>;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const {
      generateKeyPairSync,
    } = await import('node:crypto');
    
    const {
      publicKey,
      privateKey,
    } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });
    

    The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.
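
    For 'x25519' specifically, a hedged sketch of key agreement via diffieHellman(), assuming Bun matches Node.js: both sides derive the same shared secret.

    const { generateKeyPairSync, diffieHellman } = await import('node:crypto');
    
    const alice = generateKeyPairSync('x25519');
    const bob = generateKeyPairSync('x25519');
    
    // Each party combines its own private key with the other's public key.
    const aliceSecret = diffieHellman({ privateKey: alice.privateKey, publicKey: bob.publicKey });
    const bobSecret = diffieHellman({ privateKey: bob.privateKey, publicKey: alice.publicKey });
    console.log(aliceSecret.equals(bobSecret));  // true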

    function generateKeyPairSync(
    type: 'x25519',
    options: X25519KeyPairOptions<'pem', 'der'>
    ): KeyPairSyncResult<string, Buffer<ArrayBufferLike>>;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const {
      generateKeyPairSync,
    } = await import('node:crypto');
    
    const {
      publicKey,
      privateKey,
    } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });
    

    The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPairSync(
    type: 'x25519',
    options: X25519KeyPairOptions<'der', 'pem'>
    ): KeyPairSyncResult<Buffer<ArrayBufferLike>, string>;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const {
      generateKeyPairSync,
    } = await import('node:crypto');
    
    const {
      publicKey,
      privateKey,
    } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });
    

    The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPairSync(
    type: 'x25519',
    options: X25519KeyPairOptions<'der', 'der'>
    ): KeyPairSyncResult<Buffer<ArrayBufferLike>, Buffer<ArrayBufferLike>>;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const {
      generateKeyPairSync,
    } = await import('node:crypto');
    
    const {
      publicKey,
      privateKey,
    } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });
    

    The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPairSync(
    type: 'x25519',
    options: undefined | X25519KeyPairKeyObjectOptions
    ): KeyPairKeyObjectResult;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const {
      generateKeyPairSync,
    } = await import('node:crypto');
    
    const {
      publicKey,
      privateKey,
    } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });
    

    The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPairSync(
    type: 'x448',
    options: X448KeyPairOptions<'pem', 'pem'>
    ): KeyPairSyncResult<string, string>;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const {
      generateKeyPairSync,
    } = await import('node:crypto');
    
    const {
      publicKey,
      privateKey,
    } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });
    

    The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPairSync(
    type: 'x448',
    options: X448KeyPairOptions<'pem', 'der'>
    ): KeyPairSyncResult<string, Buffer<ArrayBufferLike>>;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const {
      generateKeyPairSync,
    } = await import('node:crypto');
    
    const {
      publicKey,
      privateKey,
    } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });
    

    The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPairSync(
    type: 'x448',
    options: X448KeyPairOptions<'der', 'pem'>
    ): KeyPairSyncResult<Buffer<ArrayBufferLike>, string>;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const {
      generateKeyPairSync,
    } = await import('node:crypto');
    
    const {
      publicKey,
      privateKey,
    } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });
    

    The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPairSync(
    type: 'x448',
    options: X448KeyPairOptions<'der', 'der'>
    ): KeyPairSyncResult<Buffer<ArrayBufferLike>, Buffer<ArrayBufferLike>>;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const {
      generateKeyPairSync,
    } = await import('node:crypto');
    
    const {
      publicKey,
      privateKey,
    } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });
    

    The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string; otherwise it will be a buffer containing the data encoded as DER.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPairSync(
    type: 'x448',
    options?: X448KeyPairKeyObjectOptions
    ): KeyPairKeyObjectResult;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const {
      generateKeyPairSync,
    } = await import('node:crypto');
    
    const {
      publicKey,
      privateKey,
    } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });
    

    The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string; otherwise it will be a buffer containing the data encoded as DER.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.
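
    A minimal sketch of the unencoded form, using an Ed25519 pair for brevity; with no encoding options, both parts come back as KeyObjects:

    const {
      generateKeyPairSync,
    } = await import('node:crypto');
    
    // No publicKeyEncoding/privateKeyEncoding: both keys are KeyObjects.
    const { publicKey, privateKey } = generateKeyPairSync('ed25519');
    console.log(publicKey.asymmetricKeyType); // 'ed25519'
    console.log(privateKey.type);             // 'private'
    
    // Exporting later is equivalent to passing the encoding options up front.
    console.log(publicKey.export({ type: 'spki', format: 'pem' }));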

  • function generateKeySync(
    type: 'hmac' | 'aes',
    options: { length: number }
    ): KeyObject;

    Synchronously generates a new random secret key of the given length. The type will determine which validations will be performed on the length.

    const {
      generateKeySync,
    } = await import('node:crypto');
    
    const key = generateKeySync('hmac', { length: 512 });
    console.log(key.export().toString('hex'));  // e89..........41e
    

    The size of a generated HMAC key should not exceed the block size of the underlying hash function. See createHmac for more information.

    @param type

    The intended use of the generated secret key. Currently accepted values are 'hmac' and 'aes'.
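
    A minimal sketch of the 'aes' variant; for AES keys the length must be 128, 192, or 256:

    const {
      generateKeySync,
    } = await import('node:crypto');
    
    // Generate a 256-bit AES key as a KeyObject.
    const aesKey = generateKeySync('aes', { length: 256 });
    console.log(aesKey.symmetricKeySize); // 32 (bytes)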

  • function generatePrime(
    size: number,
    callback: (err: null | Error, prime: ArrayBuffer) => void
    ): void;

    Generates a pseudorandom prime of size bits.

    If options.safe is true, the prime will be a safe prime -- that is, (prime - 1) / 2 will also be a prime.

    The options.add and options.rem parameters can be used to enforce additional requirements, e.g., for Diffie-Hellman:

    • If options.add and options.rem are both set, the prime will satisfy the condition that prime % add = rem.
    • If only options.add is set and options.safe is not true, the prime will satisfy the condition that prime % add = 1.
    • If only options.add is set and options.safe is set to true, the prime will instead satisfy the condition that prime % add = 3. This is necessary because prime % add = 1 for options.add > 2 would contradict the condition enforced by options.safe.
    • options.rem is ignored if options.add is not given.

    Both options.add and options.rem must be encoded as big-endian sequences if given as an ArrayBuffer, SharedArrayBuffer, TypedArray, Buffer, or DataView.

    By default, the prime is encoded as a big-endian sequence of octets in an ArrayBuffer. If the bigint option is true, then a bigint is provided.

    @param size

    The size (in bits) of the prime to generate.
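
    A minimal sketch using the options form documented in the overloads below, requesting a safe prime as a bigint:

    const {
      generatePrime,
    } = await import('node:crypto');
    
    // Generate a 512-bit safe prime and receive it as a bigint.
    generatePrime(512, { safe: true, bigint: true }, (err, prime) => {
      if (err) throw err;
      console.log(typeof prime);      // 'bigint'
      console.log(prime % 2n === 1n); // true; primes of this size are odd
    });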

    function generatePrime(
    size: number,
    options: GeneratePrimeOptionsBigInt,
    callback: (err: null | Error, prime: bigint) => void
    ): void;

    Generates a pseudorandom prime of size bits.

    If options.safe is true, the prime will be a safe prime -- that is, (prime - 1) / 2 will also be a prime.

    The options.add and options.rem parameters can be used to enforce additional requirements, e.g., for Diffie-Hellman:

    • If options.add and options.rem are both set, the prime will satisfy the condition that prime % add = rem.
    • If only options.add is set and options.safe is not true, the prime will satisfy the condition that prime % add = 1.
    • If only options.add is set and options.safe is set to true, the prime will instead satisfy the condition that prime % add = 3. This is necessary because prime % add = 1 for options.add > 2 would contradict the condition enforced by options.safe.
    • options.rem is ignored if options.add is not given.

    Both options.add and options.rem must be encoded as big-endian sequences if given as an ArrayBuffer, SharedArrayBuffer, TypedArray, Buffer, or DataView.

    By default, the prime is encoded as a big-endian sequence of octets in an ArrayBuffer. If the bigint option is true, then a bigint is provided.

    @param size

    The size (in bits) of the prime to generate.

    function generatePrime(
    size: number,
    options: GeneratePrimeOptionsArrayBuffer,
    callback: (err: null | Error, prime: ArrayBuffer) => void
    ): void;

    Generates a pseudorandom prime of size bits.

    If options.safe is true, the prime will be a safe prime -- that is, (prime - 1) / 2 will also be a prime.

    The options.add and options.rem parameters can be used to enforce additional requirements, e.g., for Diffie-Hellman:

    • If options.add and options.rem are both set, the prime will satisfy the condition that prime % add = rem.
    • If only options.add is set and options.safe is not true, the prime will satisfy the condition that prime % add = 1.
    • If only options.add is set and options.safe is set to true, the prime will instead satisfy the condition that prime % add = 3. This is necessary because prime % add = 1 for options.add > 2 would contradict the condition enforced by options.safe.
    • options.rem is ignored if options.add is not given.

    Both options.add and options.rem must be encoded as big-endian sequences if given as an ArrayBuffer, SharedArrayBuffer, TypedArray, Buffer, or DataView.

    By default, the prime is encoded as a big-endian sequence of octets in an ArrayBuffer. If the bigint option is true, then a bigint is provided.

    @param size

    The size (in bits) of the prime to generate.

    function generatePrime(
    size: number,
    options: GeneratePrimeOptions,
    callback: (err: null | Error, prime: bigint | ArrayBuffer) => void
    ): void;

    Generates a pseudorandom prime of size bits.

    If options.safe is true, the prime will be a safe prime -- that is, (prime - 1) / 2 will also be a prime.

    The options.add and options.rem parameters can be used to enforce additional requirements, e.g., for Diffie-Hellman:

    • If options.add and options.rem are both set, the prime will satisfy the condition that prime % add = rem.
    • If only options.add is set and options.safe is not true, the prime will satisfy the condition that prime % add = 1.
    • If only options.add is set and options.safe is set to true, the prime will instead satisfy the condition that prime % add = 3. This is necessary because prime % add = 1 for options.add > 2 would contradict the condition enforced by options.safe.
    • options.rem is ignored if options.add is not given.

    Both options.add and options.rem must be encoded as big-endian sequences if given as an ArrayBuffer, SharedArrayBuffer, TypedArray, Buffer, or DataView.

    By default, the prime is encoded as a big-endian sequence of octets in an ArrayBuffer. If the bigint option is true, then a bigint is provided.

    @param size

    The size (in bits) of the prime to generate.

  • function generatePrimeSync(
    size: number
    ): ArrayBuffer;

    Generates a pseudorandom prime of size bits.

    If options.safe is true, the prime will be a safe prime -- that is, (prime - 1) / 2 will also be a prime.

    The options.add and options.rem parameters can be used to enforce additional requirements, e.g., for Diffie-Hellman:

    • If options.add and options.rem are both set, the prime will satisfy the condition that prime % add = rem.
    • If only options.add is set and options.safe is not true, the prime will satisfy the condition that prime % add = 1.
    • If only options.add is set and options.safe is set to true, the prime will instead satisfy the condition that prime % add = 3. This is necessary because prime % add = 1 for options.add > 2 would contradict the condition enforced by options.safe.
    • options.rem is ignored if options.add is not given.

    Both options.add and options.rem must be encoded as big-endian sequences if given as an ArrayBuffer, SharedArrayBuffer, TypedArray, Buffer, or DataView.

    By default, the prime is encoded as a big-endian sequence of octets in an ArrayBuffer. If the bigint option is true, then a bigint is provided.

    @param size

    The size (in bits) of the prime to generate.
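
    A minimal sketch of the default form, where the prime comes back as an ArrayBuffer of big-endian octets:

    import { Buffer } from 'node:buffer';
    const {
      generatePrimeSync,
    } = await import('node:crypto');
    
    const prime = generatePrimeSync(128);
    console.log(Buffer.from(prime).toString('hex')); // 16 bytes as 32 hex digits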

    function generatePrimeSync(
    size: number,
    options: GeneratePrimeOptionsBigInt
    ): bigint;

    Generates a pseudorandom prime of size bits.

    If options.safe is true, the prime will be a safe prime -- that is, (prime - 1) / 2 will also be a prime.

    The options.add and options.rem parameters can be used to enforce additional requirements, e.g., for Diffie-Hellman:

    • If options.add and options.rem are both set, the prime will satisfy the condition that prime % add = rem.
    • If only options.add is set and options.safe is not true, the prime will satisfy the condition that prime % add = 1.
    • If only options.add is set and options.safe is set to true, the prime will instead satisfy the condition that prime % add = 3. This is necessary because prime % add = 1 for options.add > 2 would contradict the condition enforced by options.safe.
    • options.rem is ignored if options.add is not given.

    Both options.add and options.rem must be encoded as big-endian sequences if given as an ArrayBuffer, SharedArrayBuffer, TypedArray, Buffer, or DataView.

    By default, the prime is encoded as a big-endian sequence of octets in an ArrayBuffer. If the bigint option is true, then a bigint is provided.

    @param size

    The size (in bits) of the prime to generate.

    function generatePrimeSync(
    size: number,
    options: GeneratePrimeOptionsArrayBuffer
    ): ArrayBuffer;

    Generates a pseudorandom prime of size bits.

    If options.safe is true, the prime will be a safe prime -- that is, (prime - 1) / 2 will also be a prime.

    The options.add and options.rem parameters can be used to enforce additional requirements, e.g., for Diffie-Hellman:

    • If options.add and options.rem are both set, the prime will satisfy the condition that prime % add = rem.
    • If only options.add is set and options.safe is not true, the prime will satisfy the condition that prime % add = 1.
    • If only options.add is set and options.safe is set to true, the prime will instead satisfy the condition that prime % add = 3. This is necessary because prime % add = 1 for options.add > 2 would contradict the condition enforced by options.safe.
    • options.rem is ignored if options.add is not given.

    Both options.add and options.rem must be encoded as big-endian sequences if given as an ArrayBuffer, SharedArrayBuffer, TypedArray, Buffer, or DataView.

    By default, the prime is encoded as a big-endian sequence of octets in an ArrayBuffer. If the bigint option is true, then a bigint is provided.

    @param size

    The size (in bits) of the prime to generate.

    function generatePrimeSync(
    size: number,
    options: GeneratePrimeOptions
    ): bigint | ArrayBuffer;

    Generates a pseudorandom prime of size bits.

    If options.safe is true, the prime will be a safe prime -- that is, (prime - 1) / 2 will also be a prime.

    The options.add and options.rem parameters can be used to enforce additional requirements, e.g., for Diffie-Hellman:

    • If options.add and options.rem are both set, the prime will satisfy the condition that prime % add = rem.
    • If only options.add is set and options.safe is not true, the prime will satisfy the condition that prime % add = 1.
    • If only options.add is set and options.safe is set to true, the prime will instead satisfy the condition that prime % add = 3. This is necessary because prime % add = 1 for options.add > 2 would contradict the condition enforced by options.safe.
    • options.rem is ignored if options.add is not given.

    Both options.add and options.rem must be encoded as big-endian sequences if given as an ArrayBuffer, SharedArrayBuffer, TypedArray, Buffer, or DataView.

    By default, the prime is encoded as a big-endian sequence of octets in an ArrayBuffer. If the bigint option is true, then a bigint is provided.

    @param size

    The size (in bits) of the prime to generate.

  • function getCipherInfo(
    nameOrNid: string | number,
    options?: CipherInfoOptions
    ): undefined | CipherInfo;

    Returns information about a given cipher.

    Some ciphers accept variable length keys and initialization vectors. By default, the crypto.getCipherInfo() method will return the default values for these ciphers. To test if a given key length or iv length is acceptable for a given cipher, use the keyLength and ivLength options. If the given values are unacceptable, undefined will be returned.

    @param nameOrNid

    The name or nid of the cipher to query.
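
    A minimal sketch of both query forms:

    const {
      getCipherInfo,
    } = await import('node:crypto');
    
    console.log(getCipherInfo('aes-128-cbc'));
    // { name: 'aes-128-cbc', nid: 419, blockSize: 16, ivLength: 16, keyLength: 16, mode: 'cbc' }
    
    // Probe whether a 24-byte key is acceptable; undefined means it is not.
    console.log(getCipherInfo('aes-128-cbc', { keyLength: 24 })); // undefined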

  • function getCiphers(): string[];
    const {
      getCiphers,
    } = await import('node:crypto');
    
    console.log(getCiphers()); // ['aes-128-cbc', 'aes-128-ccm', ...]
    
    @returns

    An array with the names of the supported cipher algorithms.

  • function getCurves(): string[];
    const {
      getCurves,
    } = await import('node:crypto');
    
    console.log(getCurves()); // ['Oakley-EC2N-3', 'Oakley-EC2N-4', ...]
    
    @returns

    An array with the names of the supported elliptic curves.

  • function getDiffieHellman(
    groupName: string
    ): DiffieHellmanGroup;

    Creates a predefined DiffieHellmanGroup key exchange object. The supported groups are listed in the documentation for DiffieHellmanGroup.

    The returned object mimics the interface of objects created by createDiffieHellman, but will not allow changing the keys (with diffieHellman.setPublicKey(), for example). The advantage of using this method is that the parties do not have to generate or exchange a group modulus beforehand, saving both processor and communication time.

    Example (obtaining a shared secret):

    const {
      getDiffieHellman,
    } = await import('node:crypto');
    const alice = getDiffieHellman('modp14');
    const bob = getDiffieHellman('modp14');
    
    alice.generateKeys();
    bob.generateKeys();
    
    const aliceSecret = alice.computeSecret(bob.getPublicKey(), null, 'hex');
    const bobSecret = bob.computeSecret(alice.getPublicKey(), null, 'hex');
    
    // aliceSecret and bobSecret should be the same
    console.log(aliceSecret === bobSecret);
    
  • function getFips(): 0 | 1;
    @returns

    1 if and only if a FIPS compliant crypto provider is currently in use, 0 otherwise. A future semver-major release may change the return type of this API to a boolean.

  • function getHashes(): string[];
    const {
      getHashes,
    } = await import('node:crypto');
    
    console.log(getHashes()); // ['DSA', 'DSA-SHA', 'DSA-SHA1', ...]
    
    @returns

    An array of the names of the supported hash algorithms, such as 'RSA-SHA256'. Hash algorithms are also called "digest" algorithms.

  • function getRandomValues<T extends BufferSource>(
    typedArray: T
    ): T;

    A convenient alias for webcrypto.getRandomValues. This implementation is not compliant with the Web Crypto spec; to write web-compatible code, use webcrypto.getRandomValues instead.

    @returns

    Returns typedArray.
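
    A minimal sketch; the passed TypedArray is filled in place and also returned:

    import { Buffer } from 'node:buffer';
    import crypto from 'node:crypto';
    
    const bytes = crypto.getRandomValues(new Uint8Array(16));
    console.log(Buffer.from(bytes).toString('hex')); // 32 hex characters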

  • function hash(
    algorithm: string,
    data: BinaryLike,
    outputEncoding?: BinaryToTextEncoding
    ): string;

    A utility for creating one-shot hash digests of data. It can be faster than the object-based crypto.createHash() when hashing a smaller amount of data (<= 5MB) that's readily available. If the data can be big or if it is streamed, it's still recommended to use crypto.createHash() instead. The algorithm is dependent on the available algorithms supported by the version of OpenSSL on the platform. Examples are 'sha256', 'sha512', etc. On recent releases of OpenSSL, openssl list -digest-algorithms will display the available digest algorithms.

    Example:

    import crypto from 'node:crypto';
    import { Buffer } from 'node:buffer';
    
    // Hash a string and return the result as a hex-encoded string.
    const string = 'Node.js';
    // 10b3493287f831e81a438811a1ffba01f8cec4b7
    console.log(crypto.hash('sha1', string));
    
    // Decode a base64-encoded string into a Buffer, hash it and return
    // the result as a buffer.
    const base64 = 'Tm9kZS5qcw==';
    // <Buffer 10 b3 49 32 87 f8 31 e8 1a 43 88 11 a1 ff ba 01 f8 ce c4 b7>
    console.log(crypto.hash('sha1', Buffer.from(base64, 'base64'), 'buffer'));
    
    @param data

    When data is a string, it will be encoded as UTF-8 before being hashed. If a different input encoding is desired for a string input, users can encode the string into a TypedArray using either TextEncoder or Buffer.from() and pass the encoded TypedArray into this API instead.

    @param outputEncoding

    Encoding used to encode the returned digest.

    function hash(
    algorithm: string,
    data: BinaryLike,
    outputEncoding: 'buffer'
    ): Buffer;

    A utility for creating one-shot hash digests of data. It can be faster than the object-based crypto.createHash() when hashing a smaller amount of data (<= 5MB) that's readily available. If the data can be big or if it is streamed, it's still recommended to use crypto.createHash() instead. The algorithm is dependent on the available algorithms supported by the version of OpenSSL on the platform. Examples are 'sha256', 'sha512', etc. On recent releases of OpenSSL, openssl list -digest-algorithms will display the available digest algorithms.

    Example:

    import crypto from 'node:crypto';
    import { Buffer } from 'node:buffer';
    
    // Hash a string and return the result as a hex-encoded string.
    const string = 'Node.js';
    // 10b3493287f831e81a438811a1ffba01f8cec4b7
    console.log(crypto.hash('sha1', string));
    
    // Decode a base64-encoded string into a Buffer, hash it and return
    // the result as a buffer.
    const base64 = 'Tm9kZS5qcw==';
    // <Buffer 10 b3 49 32 87 f8 31 e8 1a 43 88 11 a1 ff ba 01 f8 ce c4 b7>
    console.log(crypto.hash('sha1', Buffer.from(base64, 'base64'), 'buffer'));
    
    @param data

    When data is a string, it will be encoded as UTF-8 before being hashed. If a different input encoding is desired for a string input, users can encode the string into a TypedArray using either TextEncoder or Buffer.from() and pass the encoded TypedArray into this API instead.

    @param outputEncoding

    Encoding used to encode the returned digest.

    function hash(
    algorithm: string,
    data: BinaryLike,
    outputEncoding?: 'buffer' | BinaryToTextEncoding
    ): string | Buffer<ArrayBufferLike>;

    A utility for creating one-shot hash digests of data. It can be faster than the object-based crypto.createHash() when hashing a smaller amount of data (<= 5MB) that's readily available. If the data can be big or if it is streamed, it's still recommended to use crypto.createHash() instead. The algorithm is dependent on the available algorithms supported by the version of OpenSSL on the platform. Examples are 'sha256', 'sha512', etc. On recent releases of OpenSSL, openssl list -digest-algorithms will display the available digest algorithms.

    Example:

    import crypto from 'node:crypto';
    import { Buffer } from 'node:buffer';
    
    // Hash a string and return the result as a hex-encoded string.
    const string = 'Node.js';
    // 10b3493287f831e81a438811a1ffba01f8cec4b7
    console.log(crypto.hash('sha1', string));
    
    // Decode a base64-encoded string into a Buffer, hash it and return
    // the result as a buffer.
    const base64 = 'Tm9kZS5qcw==';
    // <Buffer 10 b3 49 32 87 f8 31 e8 1a 43 88 11 a1 ff ba 01 f8 ce c4 b7>
    console.log(crypto.hash('sha1', Buffer.from(base64, 'base64'), 'buffer'));
    
    @param data

    When data is a string, it will be encoded as UTF-8 before being hashed. If a different input encoding is desired for a string input, users can encode the string into a TypedArray using either TextEncoder or Buffer.from() and pass the encoded TypedArray into this API instead.

    @param outputEncoding

    Encoding used to encode the returned digest.

  • function hkdf(
    digest: string,
    ikm: BinaryLike | KeyObject,
    salt: BinaryLike,
    info: BinaryLike,
    keylen: number,
    callback: (err: null | Error, derivedKey: ArrayBuffer) => void
    ): void;

    HKDF is a simple key derivation function defined in RFC 5869. The given ikm, salt and info are used with the digest to derive a key of keylen bytes.

    The supplied callback function is called with two arguments: err and derivedKey. If an error occurs while deriving the key, err will be set; otherwise err will be null. The successfully generated derivedKey will be passed to the callback as an ArrayBuffer. An error will be thrown if any of the input arguments specify invalid values or types.

    import { Buffer } from 'node:buffer';
    const {
      hkdf,
    } = await import('node:crypto');
    
    hkdf('sha512', 'key', 'salt', 'info', 64, (err, derivedKey) => {
      if (err) throw err;
      console.log(Buffer.from(derivedKey).toString('hex'));  // '24156e2...5391653'
    });
    
    @param digest

    The digest algorithm to use.

    @param ikm

    The input keying material. Must be provided but can be zero-length.

    @param salt

    The salt value. Must be provided but can be zero-length.

    @param info

    Additional info value. Must be provided but can be zero-length, and cannot be more than 1024 bytes.

    @param keylen

    The length of the key to generate. Must be greater than 0. The maximum allowable value is 255 times the number of bytes produced by the selected digest function (e.g. sha512 generates 64-byte hashes, making the maximum HKDF output 16320 bytes).

  • function hkdfSync(
    digest: string,
    ikm: BinaryLike | KeyObject,
    salt: BinaryLike,
    info: BinaryLike,
    keylen: number
    ): ArrayBuffer;

    Provides a synchronous HKDF key derivation function as defined in RFC 5869. The given ikm, salt and info are used with the digest to derive a key of keylen bytes.

    The successfully generated derivedKey will be returned as an ArrayBuffer.

    An error will be thrown if any of the input arguments specify invalid values or types, or if the derived key cannot be generated.

    import { Buffer } from 'node:buffer';
    const {
      hkdfSync,
    } = await import('node:crypto');
    
    const derivedKey = hkdfSync('sha512', 'key', 'salt', 'info', 64);
    console.log(Buffer.from(derivedKey).toString('hex'));  // '24156e2...5391653'
    
    @param digest

    The digest algorithm to use.

    @param ikm

    The input keying material. Must be provided but can be zero-length.

    @param salt

    The salt value. Must be provided but can be zero-length.

    @param info

    Additional info value. Must be provided but can be zero-length, and cannot be more than 1024 bytes.

    @param keylen

    The length of the key to generate. Must be greater than 0. The maximum allowable value is 255 times the number of bytes produced by the selected digest function (e.g. sha512 generates 64-byte hashes, making the maximum HKDF output 16320 bytes).

  • function pbkdf2(
    password: BinaryLike,
    salt: BinaryLike,
    iterations: number,
    keylen: number,
    digest: string,
    callback: (err: null | Error, derivedKey: Buffer) => void
    ): void;

    Provides an asynchronous Password-Based Key Derivation Function 2 (PBKDF2) implementation. A selected HMAC digest algorithm specified by digest is applied to derive a key of the requested byte length (keylen) from the password, salt and iterations.

    The supplied callback function is called with two arguments: err and derivedKey. If an error occurs while deriving the key, err will be set; otherwise err will be null. By default, the successfully generated derivedKey will be passed to the callback as a Buffer. An error will be thrown if any of the input arguments specify invalid values or types.

    The iterations argument must be a number set as high as possible. The higher the number of iterations, the more secure the derived key will be, but the derivation will take longer to complete.

    The salt should be as unique as possible. It is recommended that a salt is random and at least 16 bytes long. See NIST SP 800-132 for details.

    When passing strings for password or salt, please consider caveats when using strings as inputs to cryptographic APIs.

    const {
      pbkdf2,
    } = await import('node:crypto');
    
    pbkdf2('secret', 'salt', 100000, 64, 'sha512', (err, derivedKey) => {
      if (err) throw err;
      console.log(derivedKey.toString('hex'));  // '3745e48...08d59ae'
    });
    

    An array of supported digest functions can be retrieved using getHashes.

    This API uses libuv's threadpool, which can have surprising and negative performance implications for some applications; see the UV_THREADPOOL_SIZE documentation for more information.

  • function pbkdf2Sync(
    password: BinaryLike,
    salt: BinaryLike,
    iterations: number,
    keylen: number,
    digest: string
    ): Buffer;

    Provides a synchronous Password-Based Key Derivation Function 2 (PBKDF2) implementation. A selected HMAC digest algorithm specified by digest is applied to derive a key of the requested byte length (keylen) from the password, salt and iterations.

    If an error occurs an Error will be thrown, otherwise the derived key will be returned as a Buffer.

    The iterations argument must be a number set as high as possible. The higher the number of iterations, the more secure the derived key will be, but the derivation will take longer to complete.

    The salt should be as unique as possible. It is recommended that a salt is random and at least 16 bytes long. See NIST SP 800-132 for details.

    When passing strings for password or salt, please consider caveats when using strings as inputs to cryptographic APIs.

    const {
      pbkdf2Sync,
    } = await import('node:crypto');
    
    const key = pbkdf2Sync('secret', 'salt', 100000, 64, 'sha512');
    console.log(key.toString('hex'));  // '3745e48...08d59ae'
    

    An array of supported digest functions can be retrieved using getHashes.

  • function privateDecrypt(
    privateKey: RsaPrivateKey | KeyLike,
    buffer: string | ArrayBufferView<ArrayBufferLike>
    ): Buffer;

    Decrypts buffer with privateKey. buffer was previously encrypted using the corresponding public key, for example using publicEncrypt.

    If privateKey is not a KeyObject, this function behaves as if privateKey had been passed to createPrivateKey. If it is an object, the padding property can be passed. Otherwise, this function uses RSA_PKCS1_OAEP_PADDING.

  • function privateEncrypt(
    privateKey: RsaPrivateKey | KeyLike,
    buffer: string | ArrayBufferView<ArrayBufferLike>
    ): Buffer;

    Encrypts buffer with privateKey. The returned data can be decrypted using the corresponding public key, for example using publicDecrypt.

    If privateKey is not a KeyObject, this function behaves as if privateKey had been passed to createPrivateKey. If it is an object, the padding property can be passed. Otherwise, this function uses RSA_PKCS1_PADDING.
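
    A minimal round-trip sketch for privateEncrypt and its publicDecrypt counterpart; note that this pattern behaves like a raw RSA signature rather than confidentiality encryption:

    import { Buffer } from 'node:buffer';
    const {
      generateKeyPairSync,
      privateEncrypt,
      publicDecrypt,
    } = await import('node:crypto');
    
    const { publicKey, privateKey } = generateKeyPairSync('rsa', { modulusLength: 2048 });
    
    // "Encrypt" with the private key; anyone holding the public key can recover it.
    const blob = privateEncrypt(privateKey, Buffer.from('attested data'));
    console.log(publicDecrypt(publicKey, blob).toString()); // 'attested data'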

  • function pseudoRandomBytes(
    size: number
    ): Buffer;
    function pseudoRandomBytes(
    size: number,
    callback: (err: null | Error, buf: Buffer) => void
    ): void;
  • function publicDecrypt(
    key: RsaPublicKey | RsaPrivateKey | KeyLike,
    buffer: string | ArrayBufferView<ArrayBufferLike>
    ): Buffer;

    Decrypts buffer with key. buffer was previously encrypted using the corresponding private key, for example using privateEncrypt.

    If key is not a KeyObject, this function behaves as if key had been passed to createPublicKey. If it is an object, the padding property can be passed. Otherwise, this function uses RSA_PKCS1_PADDING.

    Because RSA public keys can be derived from private keys, a private key may be passed instead of a public key.

  • function publicEncrypt(
    key: RsaPublicKey | RsaPrivateKey | KeyLike,
    buffer: string | ArrayBufferView<ArrayBufferLike>
    ): Buffer;

    Encrypts the content of buffer with key and returns a new Buffer with encrypted content. The returned data can be decrypted using the corresponding private key, for example using privateDecrypt.

    If key is not a KeyObject, this function behaves as if key had been passed to createPublicKey. If it is an object, the padding property can be passed. Otherwise, this function uses RSA_PKCS1_OAEP_PADDING.

    Because RSA public keys can be derived from private keys, a private key may be passed instead of a public key.
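
    A minimal confidentiality round trip with publicEncrypt and privateDecrypt, assuming a freshly generated pair:

    import { Buffer } from 'node:buffer';
    const {
      generateKeyPairSync,
      publicEncrypt,
      privateDecrypt,
    } = await import('node:crypto');
    
    const { publicKey, privateKey } = generateKeyPairSync('rsa', { modulusLength: 2048 });
    
    // RSA_PKCS1_OAEP_PADDING is the default for this pair of functions.
    const ciphertext = publicEncrypt(publicKey, Buffer.from('secret message'));
    console.log(privateDecrypt(privateKey, ciphertext).toString()); // 'secret message'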

  • function randomBytes(
    size: number
    ): Buffer;

    Generates cryptographically strong pseudorandom data. The size argument is a number indicating the number of bytes to generate.

    If a callback function is provided, the bytes are generated asynchronously and the callback function is invoked with two arguments: err and buf. If an error occurs, err will be an Error object; otherwise it is null. The buf argument is a Buffer containing the generated bytes.

    // Asynchronous
    const {
      randomBytes,
    } = await import('node:crypto');
    
    randomBytes(256, (err, buf) => {
      if (err) throw err;
      console.log(`${buf.length} bytes of random data: ${buf.toString('hex')}`);
    });
    

    If the callback function is not provided, the random bytes are generated synchronously and returned as a Buffer. An error will be thrown if there is a problem generating the bytes.

    // Synchronous
    const {
      randomBytes,
    } = await import('node:crypto');
    
    const buf = randomBytes(256);
    console.log(
      `${buf.length} bytes of random data: ${buf.toString('hex')}`);
    

    The crypto.randomBytes() method will not complete until there is sufficient entropy available. This should normally never take longer than a few milliseconds. The only time when generating the random bytes may conceivably block for a longer period of time is right after boot, when the whole system is still low on entropy.

    This API uses libuv's threadpool, which can have surprising and negative performance implications for some applications; see the UV_THREADPOOL_SIZE documentation for more information.

    The asynchronous version of crypto.randomBytes() is carried out in a single threadpool request. To minimize threadpool task length variation, partition large randomBytes requests when doing so as part of fulfilling a client request.

    @param size

    The number of bytes to generate. The size must not be larger than 2**31 - 1.

    @returns

    A Buffer containing the generated bytes if the callback function is not provided.

    function randomBytes(
    size: number,
    callback: (err: null | Error, buf: Buffer) => void
    ): void;

    Generates cryptographically strong pseudorandom data. The size argument is a number indicating the number of bytes to generate.

    If a callback function is provided, the bytes are generated asynchronously and the callback function is invoked with two arguments: err and buf. If an error occurs, err will be an Error object; otherwise it is null. The buf argument is a Buffer containing the generated bytes.

    // Asynchronous
    const {
      randomBytes,
    } = await import('node:crypto');
    
    randomBytes(256, (err, buf) => {
      if (err) throw err;
      console.log(`${buf.length} bytes of random data: ${buf.toString('hex')}`);
    });
    

    If the callback function is not provided, the random bytes are generated synchronously and returned as a Buffer. An error will be thrown if there is a problem generating the bytes.

    // Synchronous
    const {
      randomBytes,
    } = await import('node:crypto');
    
    const buf = randomBytes(256);
    console.log(
      `${buf.length} bytes of random data: ${buf.toString('hex')}`);
    

    The crypto.randomBytes() method will not complete until there is sufficient entropy available. This should normally never take longer than a few milliseconds. The only time when generating the random bytes may conceivably block for a longer period of time is right after boot, when the whole system is still low on entropy.

    This API uses libuv's threadpool, which can have surprising and negative performance implications for some applications; see the UV_THREADPOOL_SIZE documentation for more information.

    The asynchronous version of crypto.randomBytes() is carried out in a single threadpool request. To minimize threadpool task length variation, partition large randomBytes requests when doing so as part of fulfilling a client request.

    @param size

    The number of bytes to generate. The size must not be larger than 2**31 - 1.

    @returns

    A Buffer containing the generated bytes if the callback function is not provided.

  • function randomFill<T extends ArrayBufferView<ArrayBufferLike>>(
    buffer: T,
    callback: (err: null | Error, buf: T) => void
    ): void;

    This function is similar to randomBytes but requires the first argument to be a Buffer that will be filled. It also requires that a callback is passed in.

    If the callback function is not provided, an error will be thrown.

    import { Buffer } from 'node:buffer';
    const { randomFill } = await import('node:crypto');
    
    const buf = Buffer.alloc(10);
    randomFill(buf, (err, buf) => {
      if (err) throw err;
      console.log(buf.toString('hex'));
    });
    
    randomFill(buf, 5, (err, buf) => {
      if (err) throw err;
      console.log(buf.toString('hex'));
    });
    
    // The above is equivalent to the following:
    randomFill(buf, 5, 5, (err, buf) => {
      if (err) throw err;
      console.log(buf.toString('hex'));
    });
    

    Any ArrayBuffer, TypedArray, or DataView instance may be passed as buffer.

    While this includes instances of Float32Array and Float64Array, this function should not be used to generate random floating-point numbers. The result may contain +Infinity, -Infinity, and NaN, and even if the array contains finite numbers only, they are not drawn from a uniform random distribution and have no meaningful lower or upper bounds.

    import { Buffer } from 'node:buffer';
    const { randomFill } = await import('node:crypto');
    
    const a = new Uint32Array(10);
    randomFill(a, (err, buf) => {
      if (err) throw err;
      console.log(Buffer.from(buf.buffer, buf.byteOffset, buf.byteLength)
        .toString('hex'));
    });
    
    const b = new DataView(new ArrayBuffer(10));
    randomFill(b, (err, buf) => {
      if (err) throw err;
      console.log(Buffer.from(buf.buffer, buf.byteOffset, buf.byteLength)
        .toString('hex'));
    });
    
    const c = new ArrayBuffer(10);
    randomFill(c, (err, buf) => {
      if (err) throw err;
      console.log(Buffer.from(buf).toString('hex'));
    });
    

    This API uses libuv's threadpool, which can have surprising and negative performance implications for some applications; see the UV_THREADPOOL_SIZE documentation for more information.

    The asynchronous version of crypto.randomFill() is carried out in a single threadpool request. To minimize threadpool task length variation, partition large randomFill requests when doing so as part of fulfilling a client request.

    @param buffer

    Must be supplied. The size of the provided buffer must not be larger than 2**31 - 1.

    @param callback

    function(err, buf) {}.

    function randomFill<T extends ArrayBufferView<ArrayBufferLike>>(
    buffer: T,
    offset: number,
    callback: (err: null | Error, buf: T) => void
    ): void;

    This function is similar to randomBytes but requires the first argument to be a Buffer that will be filled. It also requires that a callback is passed in.

    If the callback function is not provided, an error will be thrown.

    import { Buffer } from 'node:buffer';
    const { randomFill } = await import('node:crypto');
    
    const buf = Buffer.alloc(10);
    randomFill(buf, (err, buf) => {
      if (err) throw err;
      console.log(buf.toString('hex'));
    });
    
    randomFill(buf, 5, (err, buf) => {
      if (err) throw err;
      console.log(buf.toString('hex'));
    });
    
    // The above is equivalent to the following:
    randomFill(buf, 5, 5, (err, buf) => {
      if (err) throw err;
      console.log(buf.toString('hex'));
    });
    

    Any ArrayBuffer, TypedArray, or DataView instance may be passed as buffer.

    While this includes instances of Float32Array and Float64Array, this function should not be used to generate random floating-point numbers. The result may contain +Infinity, -Infinity, and NaN, and even if the array contains finite numbers only, they are not drawn from a uniform random distribution and have no meaningful lower or upper bounds.

    import { Buffer } from 'node:buffer';
    const { randomFill } = await import('node:crypto');
    
    const a = new Uint32Array(10);
    randomFill(a, (err, buf) => {
      if (err) throw err;
      console.log(Buffer.from(buf.buffer, buf.byteOffset, buf.byteLength)
        .toString('hex'));
    });
    
    const b = new DataView(new ArrayBuffer(10));
    randomFill(b, (err, buf) => {
      if (err) throw err;
      console.log(Buffer.from(buf.buffer, buf.byteOffset, buf.byteLength)
        .toString('hex'));
    });
    
    const c = new ArrayBuffer(10);
    randomFill(c, (err, buf) => {
      if (err) throw err;
      console.log(Buffer.from(buf).toString('hex'));
    });
    

    This API uses libuv's threadpool, which can have surprising and negative performance implications for some applications; see the UV_THREADPOOL_SIZE documentation for more information.

    The asynchronous version of crypto.randomFill() is carried out in a single threadpool request. To minimize threadpool task length variation, partition large randomFill requests when doing so as part of fulfilling a client request.

    @param buffer

    Must be supplied. The size of the provided buffer must not be larger than 2**31 - 1.

    @param callback

    function(err, buf) {}.

    function randomFill<T extends ArrayBufferView<ArrayBufferLike>>(
    buffer: T,
    offset: number,
    size: number,
    callback: (err: null | Error, buf: T) => void
    ): void;

    This function is similar to randomBytes but requires the first argument to be a Buffer that will be filled. It also requires that a callback is passed in.

    If the callback function is not provided, an error will be thrown.

    import { Buffer } from 'node:buffer';
    const { randomFill } = await import('node:crypto');
    
    const buf = Buffer.alloc(10);
    randomFill(buf, (err, buf) => {
      if (err) throw err;
      console.log(buf.toString('hex'));
    });
    
    randomFill(buf, 5, (err, buf) => {
      if (err) throw err;
      console.log(buf.toString('hex'));
    });
    
    // The above is equivalent to the following:
    randomFill(buf, 5, 5, (err, buf) => {
      if (err) throw err;
      console.log(buf.toString('hex'));
    });
    

    Any ArrayBuffer, TypedArray, or DataView instance may be passed as buffer.

    While this includes instances of Float32Array and Float64Array, this function should not be used to generate random floating-point numbers. The result may contain +Infinity, -Infinity, and NaN, and even if the array contains finite numbers only, they are not drawn from a uniform random distribution and have no meaningful lower or upper bounds.

    import { Buffer } from 'node:buffer';
    const { randomFill } = await import('node:crypto');
    
    const a = new Uint32Array(10);
    randomFill(a, (err, buf) => {
      if (err) throw err;
      console.log(Buffer.from(buf.buffer, buf.byteOffset, buf.byteLength)
        .toString('hex'));
    });
    
    const b = new DataView(new ArrayBuffer(10));
    randomFill(b, (err, buf) => {
      if (err) throw err;
      console.log(Buffer.from(buf.buffer, buf.byteOffset, buf.byteLength)
        .toString('hex'));
    });
    
    const c = new ArrayBuffer(10);
    randomFill(c, (err, buf) => {
      if (err) throw err;
      console.log(Buffer.from(buf).toString('hex'));
    });
    

    This API uses libuv's threadpool, which can have surprising and negative performance implications for some applications; see the UV_THREADPOOL_SIZE documentation for more information.

    The asynchronous version of crypto.randomFill() is carried out in a single threadpool request. To minimize threadpool task length variation, partition large randomFill requests when doing so as part of fulfilling a client request.

    @param buffer

    Must be supplied. The size of the provided buffer must not be larger than 2**31 - 1.

    @param callback

    function(err, buf) {}.

  • function randomFillSync<T extends ArrayBufferView<ArrayBufferLike>>(
    buffer: T,
    offset?: number,
    size?: number
    ): T;

    Synchronous version of randomFill.

    import { Buffer } from 'node:buffer';
    const { randomFillSync } = await import('node:crypto');
    
    const buf = Buffer.alloc(10);
    console.log(randomFillSync(buf).toString('hex'));
    
    randomFillSync(buf, 5);
    console.log(buf.toString('hex'));
    
    // The above is equivalent to the following:
    randomFillSync(buf, 5, 5);
    console.log(buf.toString('hex'));
    

    Any ArrayBuffer, TypedArray, or DataView instance may be passed as buffer.

    import { Buffer } from 'node:buffer';
    const { randomFillSync } = await import('node:crypto');
    
    const a = new Uint32Array(10);
    console.log(Buffer.from(randomFillSync(a).buffer,
                            a.byteOffset, a.byteLength).toString('hex'));
    
    const b = new DataView(new ArrayBuffer(10));
    console.log(Buffer.from(randomFillSync(b).buffer,
                            b.byteOffset, b.byteLength).toString('hex'));
    
    const c = new ArrayBuffer(10);
    console.log(Buffer.from(randomFillSync(c)).toString('hex'));
    
    @param buffer

    Must be supplied. The size of the provided buffer must not be larger than 2**31 - 1.

    @returns

    The object passed as buffer argument.

  • function randomInt(
    max: number
    ): number;

    Return a random integer n such that min <= n < max. This implementation avoids modulo bias.

    The range (max - min) must be less than 2**48. min and max must be safe integers.

    If the callback function is not provided, the random integer is generated synchronously.

    // Asynchronous
    const {
      randomInt,
    } = await import('node:crypto');
    
    randomInt(3, (err, n) => {
      if (err) throw err;
      console.log(`Random number chosen from (0, 1, 2): ${n}`);
    });
    
    // Synchronous
    const {
      randomInt,
    } = await import('node:crypto');
    
    const n = randomInt(3);
    console.log(`Random number chosen from (0, 1, 2): ${n}`);
    
    // With `min` argument
    const {
      randomInt,
    } = await import('node:crypto');
    
    const n = randomInt(1, 7);
    console.log(`The dice rolled: ${n}`);
    
    @param max

    End of random range (exclusive).

    function randomInt(
    min: number,
    max: number
    ): number;

    Return a random integer n such that min <= n < max. This implementation avoids modulo bias.

    The range (max - min) must be less than 2**48. min and max must be safe integers.

    If the callback function is not provided, the random integer is generated synchronously.

    // Asynchronous
    const {
      randomInt,
    } = await import('node:crypto');
    
    randomInt(3, (err, n) => {
      if (err) throw err;
      console.log(`Random number chosen from (0, 1, 2): ${n}`);
    });
    
    // Synchronous
    const {
      randomInt,
    } = await import('node:crypto');
    
    const n = randomInt(3);
    console.log(`Random number chosen from (0, 1, 2): ${n}`);
    
    // With `min` argument
    const {
      randomInt,
    } = await import('node:crypto');
    
    const n = randomInt(1, 7);
    console.log(`The dice rolled: ${n}`);
    
    @param min

    Start of random range (inclusive).

    @param max

    End of random range (exclusive).

    function randomInt(
    max: number,
    callback: (err: null | Error, value: number) => void
    ): void;

    Return a random integer n such that min <= n < max. This implementation avoids modulo bias.

    The range (max - min) must be less than 2**48. min and max must be safe integers.

    If the callback function is not provided, the random integer is generated synchronously.

    // Asynchronous
    const {
      randomInt,
    } = await import('node:crypto');
    
    randomInt(3, (err, n) => {
      if (err) throw err;
      console.log(`Random number chosen from (0, 1, 2): ${n}`);
    });
    
    // Synchronous
    const {
      randomInt,
    } = await import('node:crypto');
    
    const n = randomInt(3);
    console.log(`Random number chosen from (0, 1, 2): ${n}`);
    
    // With `min` argument
    const {
      randomInt,
    } = await import('node:crypto');
    
    const n = randomInt(1, 7);
    console.log(`The dice rolled: ${n}`);
    
    @param max

    End of random range (exclusive).

    @param callback

    function(err, n) {}.

    function randomInt(
    min: number,
    max: number,
    callback: (err: null | Error, value: number) => void
    ): void;

    Return a random integer n such that min <= n < max. This implementation avoids modulo bias.

    The range (max - min) must be less than 2**48. min and max must be safe integers.

    If the callback function is not provided, the random integer is generated synchronously.

    // Asynchronous
    const {
      randomInt,
    } = await import('node:crypto');
    
    randomInt(3, (err, n) => {
      if (err) throw err;
      console.log(`Random number chosen from (0, 1, 2): ${n}`);
    });
    
    // Synchronous
    const {
      randomInt,
    } = await import('node:crypto');
    
    const n = randomInt(3);
    console.log(`Random number chosen from (0, 1, 2): ${n}`);
    
    // With `min` argument
    const {
      randomInt,
    } = await import('node:crypto');
    
    const n = randomInt(1, 7);
    console.log(`The dice rolled: ${n}`);
    
    @param min

    Start of random range (inclusive).

    @param max

    End of random range (exclusive).

    @param callback

    function(err, n) {}.

  • function randomUUID(
    options?: RandomUUIDOptions
    ): `${string}-${string}-${string}-${string}-${string}`;

    Generates a random RFC 4122 version 4 UUID. The UUID is generated using a cryptographic pseudorandom number generator.
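
    A minimal sketch:

    const {
      randomUUID,
    } = await import('node:crypto');
    
    console.log(randomUUID()); // e.g. '36b8f84d-df4e-4d49-b662-bcde71a8764f'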

  • function scrypt(
    password: BinaryLike,
    salt: BinaryLike,
    keylen: number,
    callback: (err: null | Error, derivedKey: Buffer) => void
    ): void;

    Provides an asynchronous scrypt implementation. Scrypt is a password-based key derivation function that is designed to be expensive computationally and memory-wise in order to make brute-force attacks unrewarding.

    The salt should be as unique as possible. It is recommended that a salt is random and at least 16 bytes long. See NIST SP 800-132 for details.

    When passing strings for password or salt, please consider caveats when using strings as inputs to cryptographic APIs.

    The callback function is called with two arguments: err and derivedKey. err is an exception object when key derivation fails, otherwise err is null. derivedKey is passed to the callback as a Buffer.

    An exception is thrown when any of the input arguments specify invalid values or types.

    const {
      scrypt,
    } = await import('node:crypto');
    
    // Using the factory defaults.
    scrypt('password', 'salt', 64, (err, derivedKey) => {
      if (err) throw err;
      console.log(derivedKey.toString('hex'));  // '3745e48...08d59ae'
    });
    // Using a custom N parameter. Must be a power of two.
    scrypt('password', 'salt', 64, { N: 1024 }, (err, derivedKey) => {
      if (err) throw err;
      console.log(derivedKey.toString('hex'));  // '3745e48...aa39b34'
    });
    
    function scrypt(
    password: BinaryLike,
    salt: BinaryLike,
    keylen: number,
    options: ScryptOptions,
    callback: (err: null | Error, derivedKey: Buffer) => void
    ): void;

    Provides an asynchronous scrypt implementation. Scrypt is a password-based key derivation function that is designed to be expensive computationally and memory-wise in order to make brute-force attacks unrewarding.

    The salt should be as unique as possible. It is recommended that a salt is random and at least 16 bytes long. See NIST SP 800-132 for details.

    When passing strings for password or salt, please consider caveats when using strings as inputs to cryptographic APIs.

    The callback function is called with two arguments: err and derivedKey. err is an exception object when key derivation fails, otherwise err is null. derivedKey is passed to the callback as a Buffer.

    An exception is thrown when any of the input arguments specify invalid values or types.

    const {
      scrypt,
    } = await import('node:crypto');
    
    // Using the factory defaults.
    scrypt('password', 'salt', 64, (err, derivedKey) => {
      if (err) throw err;
      console.log(derivedKey.toString('hex'));  // '3745e48...08d59ae'
    });
    // Using a custom N parameter. Must be a power of two.
    scrypt('password', 'salt', 64, { N: 1024 }, (err, derivedKey) => {
      if (err) throw err;
      console.log(derivedKey.toString('hex'));  // '3745e48...aa39b34'
    });
    
  • function scryptSync(
    password: BinaryLike,
    salt: BinaryLike,
    keylen: number,
    options?: ScryptOptions
    ): Buffer;

    Provides a synchronous scrypt implementation. Scrypt is a password-based key derivation function that is designed to be expensive computationally and memory-wise in order to make brute-force attacks unrewarding.

    The salt should be as unique as possible. It is recommended that a salt is random and at least 16 bytes long. See NIST SP 800-132 for details.

    When passing strings for password or salt, please consider caveats when using strings as inputs to cryptographic APIs.

    An exception is thrown when key derivation fails, otherwise the derived key is returned as a Buffer.

    An exception is thrown when any of the input arguments specify invalid values or types.

    const {
      scryptSync,
    } = await import('node:crypto');
    // Using the factory defaults.
    
    const key1 = scryptSync('password', 'salt', 64);
    console.log(key1.toString('hex'));  // '3745e48...08d59ae'
    // Using a custom N parameter. Must be a power of two.
    const key2 = scryptSync('password', 'salt', 64, { N: 1024 });
    console.log(key2.toString('hex'));  // '3745e48...aa39b34'
    
  • function setEngine(
    engine: string,
    flags?: number
    ): void;

    Load and set the engine for some or all OpenSSL functions (selected by flags).

    engine could be either an id or a path to the engine's shared library.

    The optional flags argument uses ENGINE_METHOD_ALL by default. flags is a bit field taking one of, or a mix of, the following flags (defined in crypto.constants):

    • crypto.constants.ENGINE_METHOD_RSA
    • crypto.constants.ENGINE_METHOD_DSA
    • crypto.constants.ENGINE_METHOD_DH
    • crypto.constants.ENGINE_METHOD_RAND
    • crypto.constants.ENGINE_METHOD_EC
    • crypto.constants.ENGINE_METHOD_CIPHERS
    • crypto.constants.ENGINE_METHOD_DIGESTS
    • crypto.constants.ENGINE_METHOD_PKEY_METHS
    • crypto.constants.ENGINE_METHOD_PKEY_ASN1_METHS
    • crypto.constants.ENGINE_METHOD_ALL
    • crypto.constants.ENGINE_METHOD_NONE
  • function setFips(
    bool: boolean
    ): void;

    Enables the FIPS compliant crypto provider in a FIPS-enabled Node.js build. Throws an error if FIPS mode is not available.

    @param bool

    true to enable FIPS mode.

  • function sign(
    algorithm: undefined | null | string,
    data: ArrayBufferView,
    key: KeyLike | SignKeyObjectInput | SignPrivateKeyInput
    ): Buffer;

    Calculates and returns the signature for data using the given private key and algorithm. If algorithm is null or undefined, then the algorithm is dependent upon the key type (especially Ed25519 and Ed448).

    If key is not a KeyObject, this function behaves as if key had been passed to createPrivateKey. If it is an object, the following additional properties can be passed:

    If the callback function is provided this function uses libuv's threadpool.

    function sign(
    algorithm: undefined | null | string,
    data: ArrayBufferView,
    key: KeyLike | SignKeyObjectInput | SignPrivateKeyInput,
    callback: (error: null | Error, data: Buffer) => void
    ): void;

    Calculates and returns the signature for data using the given private key and algorithm. If algorithm is null or undefined, then the algorithm is dependent upon the key type (especially Ed25519 and Ed448).

    If key is not a KeyObject, this function behaves as if key had been passed to createPrivateKey. If it is an object, the following additional properties can be passed:

    If the callback function is provided this function uses libuv's threadpool.

  • function timingSafeEqual(
    a: ArrayBufferView,
    b: ArrayBufferView
    ): boolean;

    This function compares the underlying bytes that represent the given ArrayBuffer, TypedArray, or DataView instances using a constant-time algorithm.

    This function does not leak timing information that would allow an attacker to guess one of the values. This is suitable for comparing HMAC digests or secret values like authentication cookies or capability URLs.

    a and b must both be Buffers, TypedArrays, or DataViews, and they must have the same byte length. An error is thrown if a and b have different byte lengths.

    If at least one of a and b is a TypedArray with more than one byte per entry, such as Uint16Array, the result will be computed using the platform byte order.

    When both of the inputs are Float32Arrays or Float64Arrays, this function might return unexpected results due to IEEE 754 encoding of floating-point numbers. In particular, neither x === y nor Object.is(x, y) implies that the byte representations of two floating-point numbers x and y are equal.

    Use of crypto.timingSafeEqual does not guarantee that the surrounding code is timing-safe. Care should be taken to ensure that the surrounding code does not introduce timing vulnerabilities.
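
    A minimal sketch comparing two HMAC digests, assuming a shared secret named key:

    const {
      createHmac,
      timingSafeEqual,
    } = await import('node:crypto');
    
    const key = 'shared secret'; // assumption: both sides hold this key
    const mac = (msg) => createHmac('sha256', key).update(msg).digest();
    
    // Both inputs are 32-byte digests, satisfying the equal-length requirement.
    console.log(timingSafeEqual(mac('hello'), mac('hello'))); // true
    console.log(timingSafeEqual(mac('hello'), mac('world'))); // false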

  • function verify(
    algorithm: undefined | null | string,
    data: ArrayBufferView,
    key: KeyLike | VerifyKeyObjectInput | VerifyPublicKeyInput,
    signature: ArrayBufferView
    ): boolean;

    Verifies the given signature for data using the given key and algorithm. If algorithm is null or undefined, then the algorithm is dependent upon the key type (especially Ed25519 and Ed448).

    If key is not a KeyObject, this function behaves as if key had been passed to createPublicKey. If it is an object, the following additional properties can be passed:

    The signature argument is the previously calculated signature for the data.

    Because public keys can be derived from private keys, a private key or a public key may be passed for key.

    If the callback function is provided this function uses libuv's threadpool.

    function verify(
    algorithm: undefined | null | string,
    data: ArrayBufferView,
    key: KeyLike | VerifyKeyObjectInput | VerifyPublicKeyInput,
    signature: ArrayBufferView,
    callback: (error: null | Error, result: boolean) => void
    ): void;

    Verifies the given signature for data using the given key and algorithm. If algorithm is null or undefined, then the algorithm is dependent upon the key type (especially Ed25519 and Ed448).

    If key is not a KeyObject, this function behaves as if key had been passed to createPublicKey. If it is an object, the additional properties dsaEncoding, padding, and saltLength can also be passed.

    The signature argument is the previously calculated signature for the data.

    Because public keys can be derived from private keys, a private key or a public key may be passed for key.

    If the callback function is provided, this function uses libuv's threadpool.
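
    Putting sign and verify together, a minimal end-to-end sketch (assuming an Ed25519 key pair generated on the spot):

    const { generateKeyPairSync, sign, verify } = await import('node:crypto');
    
    const { publicKey, privateKey } = generateKeyPairSync('ed25519');
    const data = Buffer.from('message to authenticate');
    
    const signature = sign(null, data, privateKey);
    
    // Either the public key or the private key may be used to verify.
    console.log(verify(null, data, publicKey, signature)); // true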

Type definitions

  • function generateKeyPair(
    type: 'rsa',
    options: RSAKeyPairOptions<'pem', 'pem'>,
    callback: (err: null | Error, publicKey: string, privateKey: string) => void
    ): void;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

    const {
      generateKeyPair,
    } = await import('node:crypto');
    
    generateKeyPair('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    }, (err, publicKey, privateKey) => {
      // Handle errors and use the generated key pair.
    });
    

    On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

    If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.
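
    As a sketch of that promisified form (the wrapper name generateKeyPairAsync is illustrative):

    const { promisify } = await import('node:util');
    const { generateKeyPair } = await import('node:crypto');
    
    const generateKeyPairAsync = promisify(generateKeyPair);
    
    // Resolves to an object with publicKey and privateKey properties.
    const { publicKey, privateKey } = await generateKeyPairAsync('rsa', {
      modulusLength: 2048,
    });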

    function generateKeyPair(
    type: 'rsa',
    options: RSAKeyPairOptions<'pem', 'der'>,
    callback: (err: null | Error, publicKey: string, privateKey: Buffer) => void
    ): void;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

    const {
      generateKeyPair,
    } = await import('node:crypto');
    
    generateKeyPair('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    }, (err, publicKey, privateKey) => {
      // Handle errors and use the generated key pair.
    });
    

    On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

    If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPair(
    type: 'rsa',
    options: RSAKeyPairOptions<'der', 'pem'>,
    callback: (err: null | Error, publicKey: Buffer, privateKey: string) => void
    ): void;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

    const {
      generateKeyPair,
    } = await import('node:crypto');
    
    generateKeyPair('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    }, (err, publicKey, privateKey) => {
      // Handle errors and use the generated key pair.
    });
    

    On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

    If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPair(
    type: 'rsa',
    options: RSAKeyPairOptions<'der', 'der'>,
    callback: (err: null | Error, publicKey: Buffer, privateKey: Buffer) => void
    ): void;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

    const {
      generateKeyPair,
    } = await import('node:crypto');
    
    generateKeyPair('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    }, (err, publicKey, privateKey) => {
      // Handle errors and use the generated key pair.
    });
    

    On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

    If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPair(
    type: 'rsa',
    callback: (err: null | Error, publicKey: KeyObject, privateKey: KeyObject) => void
    ): void;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

    const {
      generateKeyPair,
    } = await import('node:crypto');
    
    generateKeyPair('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    }, (err, publicKey, privateKey) => {
      // Handle errors and use the generated key pair.
    });
    

    On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

    If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPair(
    type: 'rsa-pss',
    options: RSAPSSKeyPairOptions<'pem', 'pem'>,
    callback: (err: null | Error, publicKey: string, privateKey: string) => void
    ): void;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

    const {
      generateKeyPair,
    } = await import('node:crypto');
    
    generateKeyPair('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    }, (err, publicKey, privateKey) => {
      // Handle errors and use the generated key pair.
    });
    

    On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

    If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPair(
    type: 'rsa-pss',
    options: RSAPSSKeyPairOptions<'pem', 'der'>,
    callback: (err: null | Error, publicKey: string, privateKey: Buffer) => void
    ): void;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

    const {
      generateKeyPair,
    } = await import('node:crypto');
    
    generateKeyPair('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    }, (err, publicKey, privateKey) => {
      // Handle errors and use the generated key pair.
    });
    

    On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

    If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPair(
    type: 'rsa-pss',
    options: RSAPSSKeyPairOptions<'der', 'pem'>,
    callback: (err: null | Error, publicKey: Buffer, privateKey: string) => void
    ): void;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

    const {
      generateKeyPair,
    } = await import('node:crypto');
    
    generateKeyPair('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    }, (err, publicKey, privateKey) => {
      // Handle errors and use the generated key pair.
    });
    

    On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

    If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPair(
    type: 'rsa-pss',
    options: RSAPSSKeyPairOptions<'der', 'der'>,
    callback: (err: null | Error, publicKey: Buffer, privateKey: Buffer) => void
    ): void;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

    const {
      generateKeyPair,
    } = await import('node:crypto');
    
    generateKeyPair('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    }, (err, publicKey, privateKey) => {
      // Handle errors and use the generated key pair.
    });
    

    On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

    If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPair(
    type: 'rsa-pss',
    callback: (err: null | Error, publicKey: KeyObject, privateKey: KeyObject) => void
    ): void;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

    const {
      generateKeyPair,
    } = await import('node:crypto');
    
    generateKeyPair('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    }, (err, publicKey, privateKey) => {
      // Handle errors and use the generated key pair.
    });
    

    On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

    If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.
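
    The repeated example above targets 'rsa'; for 'rsa-pss' the PSS parameters can be fixed at generation time. A minimal sketch, assuming the option names used by recent Node.js releases (hashAlgorithm, saltLength):

    const { generateKeyPair } = await import('node:crypto');
    
    generateKeyPair('rsa-pss', {
      modulusLength: 2048,
      hashAlgorithm: 'sha256',
      saltLength: 32,
    }, (err, publicKey, privateKey) => {
      if (err) throw err;
      console.log(publicKey.asymmetricKeyType); // 'rsa-pss'
    });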

    function generateKeyPair(
    type: 'dsa',
    options: DSAKeyPairOptions<'pem', 'pem'>,
    callback: (err: null | Error, publicKey: string, privateKey: string) => void
    ): void;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

    const {
      generateKeyPair,
    } = await import('node:crypto');
    
    generateKeyPair('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    }, (err, publicKey, privateKey) => {
      // Handle errors and use the generated key pair.
    });
    

    On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

    If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPair(
    type: 'dsa',
    options: DSAKeyPairOptions<'pem', 'der'>,
    callback: (err: null | Error, publicKey: string, privateKey: Buffer) => void
    ): void;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

    const {
      generateKeyPair,
    } = await import('node:crypto');
    
    generateKeyPair('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    }, (err, publicKey, privateKey) => {
      // Handle errors and use the generated key pair.
    });
    

    On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

    If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPair(
    type: 'dsa',
    options: DSAKeyPairOptions<'der', 'pem'>,
    callback: (err: null | Error, publicKey: Buffer, privateKey: string) => void
    ): void;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

    const {
      generateKeyPair,
    } = await import('node:crypto');
    
    generateKeyPair('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    }, (err, publicKey, privateKey) => {
      // Handle errors and use the generated key pair.
    });
    

    On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

    If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPair(
    type: 'dsa',
    options: DSAKeyPairOptions<'der', 'der'>,
    callback: (err: null | Error, publicKey: Buffer, privateKey: Buffer) => void
    ): void;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

    const {
      generateKeyPair,
    } = await import('node:crypto');
    
    generateKeyPair('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    }, (err, publicKey, privateKey) => {
      // Handle errors and use the generated key pair.
    });
    

    On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

    If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPair(
    type: 'dsa',
    callback: (err: null | Error, publicKey: KeyObject, privateKey: KeyObject) => void
    ): void;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

    const {
      generateKeyPair,
    } = await import('node:crypto');
    
    generateKeyPair('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    }, (err, publicKey, privateKey) => {
      // Handle errors and use the generated key pair.
    });
    

    On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

    If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.
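
    For 'dsa' the options are modulusLength and divisorLength rather than RSA's settings; a small sketch (the lengths here are illustrative):

    const { generateKeyPair } = await import('node:crypto');
    
    generateKeyPair('dsa', {
      modulusLength: 2048,
      divisorLength: 256, // bit length of q
    }, (err, publicKey, privateKey) => {
      if (err) throw err;
      console.log(privateKey.asymmetricKeyType); // 'dsa'
    });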

    function generateKeyPair(
    type: 'ec',
    options: ECKeyPairOptions<'pem', 'pem'>,
    callback: (err: null | Error, publicKey: string, privateKey: string) => void
    ): void;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

    const {
      generateKeyPair,
    } = await import('node:crypto');
    
    generateKeyPair('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    }, (err, publicKey, privateKey) => {
      // Handle errors and use the generated key pair.
    });
    

    On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

    If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPair(
    type: 'ec',
    options: ECKeyPairOptions<'pem', 'der'>,
    callback: (err: null | Error, publicKey: string, privateKey: Buffer) => void
    ): void;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

    const {
      generateKeyPair,
    } = await import('node:crypto');
    
    generateKeyPair('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    }, (err, publicKey, privateKey) => {
      // Handle errors and use the generated key pair.
    });
    

    On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

    If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPair(
    type: 'ec',
    options: ECKeyPairOptions<'der', 'pem'>,
    callback: (err: null | Error, publicKey: Buffer, privateKey: string) => void
    ): void;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

    const {
      generateKeyPair,
    } = await import('node:crypto');
    
    generateKeyPair('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    }, (err, publicKey, privateKey) => {
      // Handle errors and use the generated key pair.
    });
    

    On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

    If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPair(
    type: 'ec',
    options: ECKeyPairOptions<'der', 'der'>,
    callback: (err: null | Error, publicKey: Buffer, privateKey: Buffer) => void
    ): void;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

    const {
      generateKeyPair,
    } = await import('node:crypto');
    
    generateKeyPair('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    }, (err, publicKey, privateKey) => {
      // Handle errors and use the generated key pair.
    });
    

    On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

    If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPair(
    type: 'ec',
    callback: (err: null | Error, publicKey: KeyObject, privateKey: KeyObject) => void
    ): void;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

    const {
      generateKeyPair,
    } = await import('node:crypto');
    
    generateKeyPair('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    }, (err, publicKey, privateKey) => {
      // Handle errors and use the generated key pair.
    });
    

    On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

    If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.
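
    For 'ec' the required option is namedCurve; a minimal sketch (the curve choice is illustrative):

    const { generateKeyPair } = await import('node:crypto');
    
    generateKeyPair('ec', {
      namedCurve: 'prime256v1',
    }, (err, publicKey, privateKey) => {
      if (err) throw err;
      // With no encodings specified, both keys arrive as KeyObjects.
      console.log(publicKey.asymmetricKeyType); // 'ec'
    });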

    function generateKeyPair(
    type: 'ed25519',
    options: ED25519KeyPairOptions<'pem', 'pem'>,
    callback: (err: null | Error, publicKey: string, privateKey: string) => void
    ): void;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

    const {
      generateKeyPair,
    } = await import('node:crypto');
    
    generateKeyPair('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    }, (err, publicKey, privateKey) => {
      // Handle errors and use the generated key pair.
    });
    

    On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

    If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPair(
    type: 'ed25519',
    options: ED25519KeyPairOptions<'pem', 'der'>,
    callback: (err: null | Error, publicKey: string, privateKey: Buffer) => void
    ): void;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

    const {
      generateKeyPair,
    } = await import('node:crypto');
    
    generateKeyPair('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    }, (err, publicKey, privateKey) => {
      // Handle errors and use the generated key pair.
    });
    

    On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

    If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPair(
    type: 'ed25519',
    options: ED25519KeyPairOptions<'der', 'pem'>,
    callback: (err: null | Error, publicKey: Buffer, privateKey: string) => void
    ): void;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

    const {
      generateKeyPair,
    } = await import('node:crypto');
    
    generateKeyPair('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    }, (err, publicKey, privateKey) => {
      // Handle errors and use the generated key pair.
    });
    

    On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

    If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPair(
    type: 'ed25519',
    options: ED25519KeyPairOptions<'der', 'der'>,
    callback: (err: null | Error, publicKey: Buffer, privateKey: Buffer) => void
    ): void;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

    const {
      generateKeyPair,
    } = await import('node:crypto');
    
    generateKeyPair('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    }, (err, publicKey, privateKey) => {
      // Handle errors and use the generated key pair.
    });
    

    On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

    If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPair(
    type: 'ed25519',
    options: undefined | ED25519KeyPairKeyObjectOptions,
    callback: (err: null | Error, publicKey: KeyObject, privateKey: KeyObject) => void
    ): void;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

    const {
      generateKeyPair,
    } = await import('node:crypto');
    
    generateKeyPair('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    }, (err, publicKey, privateKey) => {
      // Handle errors and use the generated key pair.
    });
    

    On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

    If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.
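
    For 'ed25519' no options are required at all; a minimal sketch that keeps the result as KeyObjects and exports on demand (options is passed explicitly as undefined to match this overload):

    const { generateKeyPair } = await import('node:crypto');
    
    generateKeyPair('ed25519', undefined, (err, publicKey, privateKey) => {
      if (err) throw err;
      // Export lazily instead of choosing an encoding up front.
      console.log(publicKey.export({ type: 'spki', format: 'pem' }));
    });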

    function generateKeyPair(
    type: 'ed448',
    options: ED448KeyPairOptions<'pem', 'pem'>,
    callback: (err: null | Error, publicKey: string, privateKey: string) => void
    ): void;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

    const {
      generateKeyPair,
    } = await import('node:crypto');
    
    generateKeyPair('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    }, (err, publicKey, privateKey) => {
      // Handle errors and use the generated key pair.
    });
    

    On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

    If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPair(
    type: 'ed448',
    options: ED448KeyPairOptions<'pem', 'der'>,
    callback: (err: null | Error, publicKey: string, privateKey: Buffer) => void
    ): void;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

    const {
      generateKeyPair,
    } = await import('node:crypto');
    
    generateKeyPair('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    }, (err, publicKey, privateKey) => {
      // Handle errors and use the generated key pair.
    });
    

    On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

    If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPair(
    type: 'ed448',
    options: ED448KeyPairOptions<'der', 'pem'>,
    callback: (err: null | Error, publicKey: Buffer, privateKey: string) => void
    ): void;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

    const {
      generateKeyPair,
    } = await import('node:crypto');
    
    generateKeyPair('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    }, (err, publicKey, privateKey) => {
      // Handle errors and use the generated key pair.
    });
    

    On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

    If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPair(
    type: 'ed448',
    options: ED448KeyPairOptions<'der', 'der'>,
    callback: (err: null | Error, publicKey: Buffer, privateKey: Buffer) => void
    ): void;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

    const {
      generateKeyPair,
    } = await import('node:crypto');
    
    generateKeyPair('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    }, (err, publicKey, privateKey) => {
      // Handle errors and use the generated key pair.
    });
    

    On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

    If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPair(
    type: 'ed448',
    options: undefined | ED448KeyPairKeyObjectOptions,
    callback: (err: null | Error, publicKey: KeyObject, privateKey: KeyObject) => void
    ): void;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

    const {
      generateKeyPair,
    } = await import('node:crypto');
    
    generateKeyPair('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    }, (err, publicKey, privateKey) => {
      // Handle errors and use the generated key pair.
    });
    

    On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

    If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPair(
    type: 'x25519',
    options: X25519KeyPairOptions<'pem', 'pem'>,
    callback: (err: null | Error, publicKey: string, privateKey: string) => void
    ): void;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

    const {
      generateKeyPair,
    } = await import('node:crypto');
    
    generateKeyPair('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    }, (err, publicKey, privateKey) => {
      // Handle errors and use the generated key pair.
    });
    

    On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

    If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPair(
    type: 'x25519',
    options: X25519KeyPairOptions<'pem', 'der'>,
    callback: (err: null | Error, publicKey: string, privateKey: Buffer) => void
    ): void;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

    const {
      generateKeyPair,
    } = await import('node:crypto');
    
    generateKeyPair('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    }, (err, publicKey, privateKey) => {
      // Handle errors and use the generated key pair.
    });
    

    On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

    If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPair(
    type: 'x25519',
    options: X25519KeyPairOptions<'der', 'pem'>,
    callback: (err: null | Error, publicKey: Buffer, privateKey: string) => void
    ): void;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

    const {
      generateKeyPair,
    } = await import('node:crypto');
    
    generateKeyPair('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    }, (err, publicKey, privateKey) => {
      // Handle errors and use the generated key pair.
    });
    

    On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

    If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPair(
    type: 'x25519',
    options: X25519KeyPairOptions<'der', 'der'>,
    callback: (err: null | Error, publicKey: Buffer, privateKey: Buffer) => void
    ): void;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

    const {
      generateKeyPair,
    } = await import('node:crypto');
    
    generateKeyPair('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    }, (err, publicKey, privateKey) => {
      // Handle errors and use the generated key pair.
    });
    

    On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

    If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPair(
    type: 'x25519',
    options: undefined | X25519KeyPairKeyObjectOptions,
    callback: (err: null | Error, publicKey: KeyObject, privateKey: KeyObject) => void
    ): void;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

    const {
      generateKeyPair,
    } = await import('node:crypto');
    
    generateKeyPair('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    }, (err, publicKey, privateKey) => {
      // Handle errors and use the generated key pair.
    });
    

    On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

    If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.
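
    Since 'x25519' keys are for key agreement rather than signing, here is a sketch pairing the promisified generateKeyPair with diffieHellman; both sides are generated locally purely for illustration:

    const { generateKeyPair, diffieHellman } = await import('node:crypto');
    const { promisify } = await import('node:util');
    
    const generate = promisify(generateKeyPair);
    const alice = await generate('x25519');
    const bob = await generate('x25519');
    
    // Both sides derive the same shared secret.
    const secretA = diffieHellman({ privateKey: alice.privateKey, publicKey: bob.publicKey });
    const secretB = diffieHellman({ privateKey: bob.privateKey, publicKey: alice.publicKey });
    console.log(secretA.equals(secretB)); // true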

    function generateKeyPair(
    type: 'x448',
    options: X448KeyPairOptions<'pem', 'pem'>,
    callback: (err: null | Error, publicKey: string, privateKey: string) => void
    ): void;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

    const {
      generateKeyPair,
    } = await import('node:crypto');
    
    generateKeyPair('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    }, (err, publicKey, privateKey) => {
      // Handle errors and use the generated key pair.
    });
    

    On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

    If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPair(
    type: 'x448',
    options: X448KeyPairOptions<'pem', 'der'>,
    callback: (err: null | Error, publicKey: string, privateKey: Buffer) => void
    ): void;

    Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

    If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

    It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

    const {
      generateKeyPair,
    } = await import('node:crypto');
    
    generateKeyPair('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    }, (err, publicKey, privateKey) => {
      // Handle errors and use the generated key pair.
    });
    

    On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

    If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

    @param type

    Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.

    function generateKeyPair(
    type: 'x448',
    options: X448KeyPairOptions<'der', 'pem'>,
    callback: (err: null | Error, publicKey: Buffer, privateKey: string) => void
    ): void;

    function generateKeyPair(
    type: 'x448',
    options: X448KeyPairOptions<'der', 'der'>,
    callback: (err: null | Error, publicKey: Buffer, privateKey: Buffer) => void
    ): void;

    function generateKeyPair(
    type: 'x448',
    options: undefined | X448KeyPairKeyObjectOptions,
    callback: (err: null | Error, publicKey: KeyObject, privateKey: KeyObject) => void
    ): void;
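
    As a concrete illustration of the KeyObject overload above, here is a minimal sketch generating an x448 key pair, plus its util.promisify()ed variant. The empty options object is an assumption (x448 requires no options); error handling is elided:

    const { generateKeyPair } = await import('node:crypto');
    const { promisify } = await import('node:util');

    // Callback form: with no encoding options, KeyObject instances are returned.
    generateKeyPair('x448', {}, (err, publicKey, privateKey) => {
      if (err) throw err;
      console.log(publicKey.asymmetricKeyType); // 'x448'
      console.log(privateKey.type);             // 'private'
    });

    // Promisified form resolves with { publicKey, privateKey }.
    const { publicKey, privateKey } = await promisify(generateKeyPair)('x448', {});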

    namespace generateKeyPair

    • namespace webcrypto

      An implementation of the Web Crypto API standard.

      See the Web Crypto API documentation for details.

    • interface AsymmetricKeyDetails

    • interface BasePrivateKeyEncodingOptions<T extends KeyFormat>

    • interface CheckPrimeOptions

      • checks?: number

        The number of Miller-Rabin probabilistic primality iterations to perform. When the value is 0 (zero), a number of checks is used that yields a false positive rate of at most 2**-64 for random input. Care must be used when selecting a number of checks. Refer to the OpenSSL documentation for the BN_is_prime_ex function's nchecks option for more details.
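
        For illustration, a minimal sketch of passing this option to checkPrime; the candidate here is a known Mersenne prime, chosen arbitrarily:

        const { checkPrime } = await import('node:crypto');

        // checks: 0 lets OpenSSL pick an iteration count targeting a
        // false positive rate of at most 2**-64.
        checkPrime(2n ** 127n - 1n, { checks: 0 }, (err, isPrime) => {
          if (err) throw err;
          console.log(isPrime); // true
        });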

    • interface CipherCCM

      Instances of the Cipher class are used to encrypt data. The class can be used in one of two ways:

      • As a stream that is both readable and writable, where plain unencrypted data is written to produce encrypted data on the readable side, or
      • Using the cipher.update() and cipher.final() methods to produce the encrypted data.

      The createCipheriv method is used to create Cipher instances. Cipher objects are not to be created directly using the new keyword.

      Example: Using Cipher objects as streams:

      const {
        scrypt,
        randomFill,
        createCipheriv,
      } = await import('node:crypto');
      
      const algorithm = 'aes-192-cbc';
      const password = 'Password used to generate key';
      
      // First, we'll generate the key. The key length is dependent on the algorithm.
      // In this case for aes192, it is 24 bytes (192 bits).
      scrypt(password, 'salt', 24, (err, key) => {
        if (err) throw err;
        // Then, we'll generate a random initialization vector
        randomFill(new Uint8Array(16), (err, iv) => {
          if (err) throw err;
      
          // Once we have the key and iv, we can create and use the cipher...
          const cipher = createCipheriv(algorithm, key, iv);
      
          let encrypted = '';
          cipher.setEncoding('hex');
      
          cipher.on('data', (chunk) => encrypted += chunk);
          cipher.on('end', () => console.log(encrypted));
      
          cipher.write('some clear text data');
          cipher.end();
        });
      });
      

      Example: Using Cipher and piped streams:

      import {
        createReadStream,
        createWriteStream,
      } from 'node:fs';
      
      import {
        pipeline,
      } from 'node:stream';
      
      const {
        scrypt,
        randomFill,
        createCipheriv,
      } = await import('node:crypto');
      
      const algorithm = 'aes-192-cbc';
      const password = 'Password used to generate key';
      
      // First, we'll generate the key. The key length is dependent on the algorithm.
      // In this case for aes192, it is 24 bytes (192 bits).
      scrypt(password, 'salt', 24, (err, key) => {
        if (err) throw err;
        // Then, we'll generate a random initialization vector
        randomFill(new Uint8Array(16), (err, iv) => {
          if (err) throw err;
      
          const cipher = createCipheriv(algorithm, key, iv);
      
          const input = createReadStream('test.js');
          const output = createWriteStream('test.enc');
      
          pipeline(input, cipher, output, (err) => {
            if (err) throw err;
          });
        });
      });
      

      Example: Using the cipher.update() and cipher.final() methods:

      const {
        scrypt,
        randomFill,
        createCipheriv,
      } = await import('node:crypto');
      
      const algorithm = 'aes-192-cbc';
      const password = 'Password used to generate key';
      
      // First, we'll generate the key. The key length is dependent on the algorithm.
      // In this case for aes192, it is 24 bytes (192 bits).
      scrypt(password, 'salt', 24, (err, key) => {
        if (err) throw err;
        // Then, we'll generate a random initialization vector
        randomFill(new Uint8Array(16), (err, iv) => {
          if (err) throw err;
      
          const cipher = createCipheriv(algorithm, key, iv);
      
          let encrypted = cipher.update('some clear text data', 'utf8', 'hex');
          encrypted += cipher.final('hex');
          console.log(encrypted);
        });
      });
      
      • allowHalfOpen: boolean

        If false then the stream will automatically end the writable side when the readable side ends. Set initially by the allowHalfOpen constructor option, which defaults to true.

        This can be changed manually to change the half-open behavior of an existing Duplex stream instance, but must be changed before the 'end' event is emitted.

      • readonly closed: boolean

        Is true after 'close' has been emitted.

      • destroyed: boolean

        Is true after readable.destroy() has been called.

      • readonly errored: null | Error

        Returns error if the stream has been destroyed with an error.

      • readable: boolean

        Is true if it is safe to call read, which means the stream has not been destroyed or emitted 'error' or 'end'.

      • readonly readableAborted: boolean

        Returns whether the stream was destroyed or errored before emitting 'end'.

      • readonly readableDidRead: boolean

        Returns whether 'data' has been emitted.

      • readonly readableEncoding: null | BufferEncoding

        Getter for the property encoding of a given Readable stream. The encoding property can be set using the setEncoding method.

      • readonly readableEnded: boolean

        Becomes true when 'end' event is emitted.

      • readonly readableFlowing: null | boolean

        This property reflects the current state of a Readable stream as described in the Three states section.

      • readonly readableHighWaterMark: number

        Returns the value of highWaterMark passed when creating this Readable.

      • readonly readableLength: number

        This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the highWaterMark.

      • readonly readableObjectMode: boolean

        Getter for the property objectMode of a given Readable stream.

      • readonly writable: boolean

        Is true if it is safe to call writable.write(), which means the stream has not been destroyed, errored, or ended.

      • readonly writableCorked: number

        Number of times writable.uncork() needs to be called in order to fully uncork the stream.

      • readonly writableEnded: boolean

        Is true after writable.end() has been called. This property does not indicate whether the data has been flushed, for this use writable.writableFinished instead.

      • readonly writableFinished: boolean

        Is set to true immediately before the 'finish' event is emitted.

      • readonly writableHighWaterMark: number

        Return the value of highWaterMark passed when creating this Writable.

      • readonly writableLength: number

        This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the highWaterMark.

      • readonly writableNeedDrain: boolean

        Is true if the stream's buffer has been full and stream will emit 'drain'.

      • readonly writableObjectMode: boolean

        Getter for the property objectMode of a given Writable stream.

      • _construct(
        callback: (error?: null | Error) => void
        ): void;
      • _destroy(
        error: null | Error,
        callback: (error?: null | Error) => void
        ): void;
      • _final(
        callback: (error?: null | Error) => void
        ): void;
      • _flush(
        callback: TransformCallback
        ): void;
      • _read(
        size: number
        ): void;
      • _transform(
        chunk: any,
        encoding: BufferEncoding,
        callback: TransformCallback
        ): void;
      • _write(
        chunk: any,
        encoding: BufferEncoding,
        callback: (error?: null | Error) => void
        ): void;
      • _writev(
        chunks: { chunk: any; encoding: BufferEncoding }[],
        callback: (error?: null | Error) => void
        ): void;
      • [Symbol.asyncDispose](): Promise<void>;

        Calls readable.destroy() with an AbortError and returns a promise that fulfills when the stream is finished.

      • [Symbol.asyncIterator](): AsyncIterator<any>;
      • [captureRejectionSymbol]?(
        error: Error,
        event: string | symbol,
        ...args: AnyRest
        ): void;
      • addListener(
        event: 'close',
        listener: () => void
        ): this;

        Event emitter. The defined events on documents include:

        1. close
        2. data
        3. drain
        4. end
        5. error
        6. finish
        7. pause
        8. pipe
        9. readable
        10. resume
        11. unpipe

        event: 'data',
        listener: (chunk: any) => void
        ): this;
        event: 'drain',
        listener: () => void
        ): this;
        event: 'end',
        listener: () => void
        ): this;
        event: 'error',
        listener: (err: Error) => void
        ): this;
        event: 'finish',
        listener: () => void
        ): this;
        event: 'pause',
        listener: () => void
        ): this;
        event: 'pipe',
        listener: (src: Readable) => void
        ): this;
        event: 'readable',
        listener: () => void
        ): this;
        event: 'resume',
        listener: () => void
        ): this;
        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;
        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • asIndexedPairs(
        options?: Pick<ArrayOptions, 'signal'>

        This method returns a new stream with chunks of the underlying stream paired with a counter in the form [index, chunk]. The first index value is 0 and it increases by 1 for each chunk produced.

        @returns

        a stream of indexed pairs.

      • compose<T extends ReadableStream>(
        stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,
        options?: { signal: AbortSignal }
        ): T;
      • cork(): void;

        The writable.cork() method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.

        The primary intent of writable.cork() is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them all to writable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork() without implementing writable._writev() may have an adverse effect on throughput.

        See also: writable.uncork(), writable._writev().

      • destroy(
        error?: Error
        ): this;

        Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the readable stream will release any internal resources and subsequent calls to push() will be ignored.

        Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.

        Implementors should not override this method, but instead implement readable._destroy().

        @param error

        Error which will be passed as payload in 'error' event

      • drop(
        limit: number,
        options?: Pick<ArrayOptions, 'signal'>

        This method returns a new stream with the first limit chunks dropped from the start.

        @param limit

        the number of chunks to drop from the readable.

        @returns

        a stream with limit chunks dropped from the start.
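
        A minimal sketch, using Readable.from from node:stream purely for illustration (these helpers are inherited from the underlying stream):

        import { Readable } from 'node:stream';

        // Drops the first two chunks, leaving the rest.
        const remaining = await Readable.from([1, 2, 3, 4]).drop(2).toArray();
        console.log(remaining); // [3, 4]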

      • emit(
        event: 'close'
        ): boolean;

        Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

        Returns true if the event had listeners, false otherwise.

        import { EventEmitter } from 'node:events';
        const myEmitter = new EventEmitter();
        
        // First listener
        myEmitter.on('event', function firstListener() {
          console.log('Helloooo! first listener');
        });
        // Second listener
        myEmitter.on('event', function secondListener(arg1, arg2) {
          console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
        });
        // Third listener
        myEmitter.on('event', function thirdListener(...args) {
          const parameters = args.join(', ');
          console.log(`event with parameters ${parameters} in third listener`);
        });
        
        console.log(myEmitter.listeners('event'));
        
        myEmitter.emit('event', 1, 2, 3, 4, 5);
        
        // Prints:
        // [
        //   [Function: firstListener],
        //   [Function: secondListener],
        //   [Function: thirdListener]
        // ]
        // Helloooo! first listener
        // event with parameters 1, 2 in second listener
        // event with parameters 1, 2, 3, 4, 5 in third listener
        
        event: 'data',
        chunk: any
        ): boolean;
        event: 'drain'
        ): boolean;
        event: 'end'
        ): boolean;
        event: 'error',
        err: Error
        ): boolean;
        event: 'finish'
        ): boolean;
        event: 'pause'
        ): boolean;
        event: 'pipe',
        ): boolean;
        event: 'readable'
        ): boolean;
        event: 'resume'
        ): boolean;
        event: 'unpipe',
        ): boolean;
        event: string | symbol,
        ...args: any[]
        ): boolean;
      • end(
        cb?: () => void
        ): this;

        Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

        Calling the write method after calling end will raise an error.

        // Write 'hello, ' and then end with 'world!'.
        import fs from 'node:fs';
        const file = fs.createWriteStream('example.txt');
        file.write('hello, ');
        file.end('world!');
        // Writing more now is not allowed!
        
        chunk: any,
        cb?: () => void
        ): this;

        Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

        Calling the write method after calling end will raise an error.

        // Write 'hello, ' and then end with 'world!'.
        import fs from 'node:fs';
        const file = fs.createWriteStream('example.txt');
        file.write('hello, ');
        file.end('world!');
        // Writing more now is not allowed!
        
        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        chunk: any,
        encoding: BufferEncoding,
        cb?: () => void
        ): this;

        Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

        Calling the write method after calling end will raise an error.

        // Write 'hello, ' and then end with 'world!'.
        import fs from 'node:fs';
        const file = fs.createWriteStream('example.txt');
        file.write('hello, ');
        file.end('world!');
        // Writing more now is not allowed!
        
        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        @param encoding

        The encoding if chunk is a string

      • eventNames(): (string | symbol)[];

        Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or Symbols.

        import { EventEmitter } from 'node:events';
        
        const myEE = new EventEmitter();
        myEE.on('foo', () => {});
        myEE.on('bar', () => {});
        
        const sym = Symbol('symbol');
        myEE.on(sym, () => {});
        
        console.log(myEE.eventNames());
        // Prints: [ 'foo', 'bar', Symbol(symbol) ]
        
      • every(
        fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
        options?: ArrayOptions
        ): Promise<boolean>;

        This method is similar to Array.prototype.every and calls fn on each chunk in the stream to check if all awaited return values are truthy value for fn. Once an fn call on a chunk awaited return value is falsy, the stream is destroyed and the promise is fulfilled with false. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to true if fn returned a truthy value for every one of the chunks.
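
        A short sketch, again using Readable.from for illustration:

        import { Readable } from 'node:stream';

        // Resolves true only if the predicate holds for every chunk.
        const allPositive = await Readable.from([1, 2, 3]).every((x) => x > 0);
        console.log(allPositive); // true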

      • filter(
        fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
        options?: ArrayOptions

        This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be awaited.

        @param fn

        a function to filter chunks from the stream. Async or not.

        @returns

        a stream filtered with the predicate fn.
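
        For illustration:

        import { Readable } from 'node:stream';

        // Keeps only the even chunks.
        const evens = await Readable.from([1, 2, 3, 4])
          .filter((x) => x % 2 === 0)
          .toArray();
        console.log(evens); // [2, 4]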

      • final(): Buffer;

        Once the cipher.final() method has been called, the Cipher object can no longer be used to encrypt data. Attempts to call cipher.final() more than once will result in an error being thrown.

        @returns

        Any remaining enciphered contents. If outputEncoding is specified, a string is returned. If an outputEncoding is not provided, a Buffer is returned.

        outputEncoding: BufferEncoding
        ): string;

        Once the cipher.final() method has been called, the Cipher object can no longer be used to encrypt data. Attempts to call cipher.final() more than once will result in an error being thrown.

        @param outputEncoding

        The encoding of the return value.

        @returns

        Any remaining enciphered contents. If outputEncoding is specified, a string is returned. If an outputEncoding is not provided, a Buffer is returned.

      • find<T>(
        fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => data is T,
        options?: ArrayOptions
        ): Promise<undefined | T>;

        This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.

        fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
        options?: ArrayOptions
        ): Promise<any>;

        This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.
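
        For illustration:

        import { Readable } from 'node:stream';

        // Resolves with the first chunk greater than 2, or undefined.
        const found = await Readable.from([1, 2, 3, 4]).find((x) => x > 2);
        console.log(found); // 3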

      • flatMap(
        fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => any,
        options?: ArrayOptions

        This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.

        It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.

        @param fn

        a function to map over every chunk in the stream. May be async. May be a stream or generator.

        @returns

        a stream flat-mapped with the function fn.
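
        For illustration:

        import { Readable } from 'node:stream';

        // Each chunk maps to an iterable whose items are flattened
        // into the resulting stream.
        const flattened = await Readable.from([1, 2]).flatMap((x) => [x, x * 10]).toArray();
        console.log(flattened); // [1, 10, 2, 20]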

      • forEach(
        fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => void | Promise<void>,
        options?: ArrayOptions
        ): Promise<void>;

        This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be awaited.

        This method is different from for await...of loops in that it can optionally process chunks concurrently. In addition, a forEach iteration can only be stopped by having passed a signal option and aborting the related AbortController while for await...of can be stopped with break or return. In either case the stream will be destroyed.

        This method is different from listening to the 'data' event in that it uses the readable event in the underlying machinery and can limit the number of concurrent fn calls.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise for when the stream has finished.
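
        For illustration:

        import { Readable } from 'node:stream';

        // Resolves once every chunk has been visited.
        await Readable.from([1, 2, 3]).forEach((x) => console.log(x));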

      • getMaxListeners(): number;

        Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to EventEmitter.defaultMaxListeners.

      • isPaused(): boolean;

        The readable.isPaused() method returns the current operating state of the Readable. This is used primarily by the mechanism that underlies the readable.pipe() method. In most typical cases, there will be no reason to use this method directly.

        import stream from 'node:stream';
        const readable = new stream.Readable();
        
        readable.isPaused(); // === false
        readable.pause();
        readable.isPaused(); // === true
        readable.resume();
        readable.isPaused(); // === false
        
      • iterator(
        options?: { destroyOnReturn: boolean }
        ): AsyncIterator<any>;

        The iterator created by this method gives users the option to cancel the destruction of the stream if the for await...of loop is exited by return, break, or throw, or if the iterator should destroy the stream if the stream emitted an error during iteration.
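
        A minimal sketch of the destroyOnReturn option:

        import { Readable } from 'node:stream';

        const readable = Readable.from([1, 2, 3]);

        // With destroyOnReturn: false, breaking out of the loop
        // leaves the stream alive for further reads.
        for await (const chunk of readable.iterator({ destroyOnReturn: false })) {
          console.log(chunk); // 1
          break;
        }
        console.log(readable.destroyed); // false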

      • listenerCount(
        eventName: string | symbol,
        listener?: Function
        ): number;

        Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.

        @param eventName

        The name of the event being listened for

        @param listener

        The event handler function

      • listeners(
        eventName: string | symbol
        ): Function[];

        Returns a copy of the array of listeners for the event named eventName.

        server.on('connection', (stream) => {
          console.log('someone connected!');
        });
        console.log(util.inspect(server.listeners('connection')));
        // Prints: [ [Function] ]
        
      • map(
        fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => any,
        options?: ArrayOptions

        This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be awaited before being passed to the result stream.

        @param fn

        a function to map over every chunk in the stream. Async or not.

        @returns

        a stream mapped with the function fn.
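
        For illustration:

        import { Readable } from 'node:stream';

        // Doubles each chunk; async mappers would be awaited.
        const doubled = await Readable.from([1, 2, 3]).map((x) => x * 2).toArray();
        console.log(doubled); // [2, 4, 6]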

      • off<K>(
        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;

        Alias for emitter.removeListener().

      • on(
        event: 'close',
        listener: () => void
        ): this;

        Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

        server.on('connection', (stream) => {
          console.log('someone connected!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

        import { EventEmitter } from 'node:events';
        const myEE = new EventEmitter();
        myEE.on('foo', () => console.log('a'));
        myEE.prependListener('foo', () => console.log('b'));
        myEE.emit('foo');
        // Prints:
        //   b
        //   a
        
        @param listener

        The callback function

        event: 'data',
        listener: (chunk: any) => void
        ): this;
        event: 'drain',
        listener: () => void
        ): this;
        event: 'end',
        listener: () => void
        ): this;
        event: 'error',
        listener: (err: Error) => void
        ): this;
        event: 'finish',
        listener: () => void
        ): this;
        event: 'pause',
        listener: () => void
        ): this;
        event: 'pipe',
        listener: (src: Readable) => void
        ): this;
        event: 'readable',
        listener: () => void
        ): this;
        event: 'resume',
        listener: () => void
        ): this;
        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;
        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • once(
        event: 'close',
        listener: () => void
        ): this;

        Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

        server.once('connection', (stream) => {
          console.log('Ah, we have our first user!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

        import { EventEmitter } from 'node:events';
        const myEE = new EventEmitter();
        myEE.once('foo', () => console.log('a'));
        myEE.prependOnceListener('foo', () => console.log('b'));
        myEE.emit('foo');
        // Prints:
        //   b
        //   a
        
        @param listener

        The callback function

        event: 'data',
        listener: (chunk: any) => void
        ): this;
        event: 'drain',
        listener: () => void
        ): this;
        event: 'end',
        listener: () => void
        ): this;
        event: 'error',
        listener: (err: Error) => void
        ): this;
        event: 'finish',
        listener: () => void
        ): this;
        event: 'pause',
        listener: () => void
        ): this;
        event: 'pipe',
        listener: (src: Readable) => void
        ): this;
        event: 'readable',
        listener: () => void
        ): this;
        event: 'resume',
        listener: () => void
        ): this;
        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;
        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • pause(): this;

        The readable.pause() method will cause a stream in flowing mode to stop emitting 'data' events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.

        const readable = getReadableStreamSomehow();
        readable.on('data', (chunk) => {
          console.log(`Received ${chunk.length} bytes of data.`);
          readable.pause();
          console.log('There will be no additional data for 1 second.');
          setTimeout(() => {
            console.log('Now data will start flowing again.');
            readable.resume();
          }, 1000);
        });
        

        The readable.pause() method has no effect if there is a 'readable' event listener.

      • pipe<T extends WritableStream>(
        destination: T,
        options?: { end: boolean }
        ): T;
      • prependListener(
        event: 'close',
        listener: () => void
        ): this;

        Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

        server.prependListener('connection', (stream) => {
          console.log('someone connected!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        @param listener

        The callback function

        event: 'data',
        listener: (chunk: any) => void
        ): this;
        event: 'drain',
        listener: () => void
        ): this;
        event: 'end',
        listener: () => void
        ): this;
        event: 'error',
        listener: (err: Error) => void
        ): this;
        event: 'finish',
        listener: () => void
        ): this;
        event: 'pause',
        listener: () => void
        ): this;
        event: 'pipe',
        listener: (src: Readable) => void
        ): this;
        event: 'readable',
        listener: () => void
        ): this;
        event: 'resume',
        listener: () => void
        ): this;
        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;
        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • prependOnceListener(
        event: 'close',
        listener: () => void
        ): this;

        Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

        server.prependOnceListener('connection', (stream) => {
          console.log('Ah, we have our first user!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        @param listener

        The callback function

        event: 'data',
        listener: (chunk: any) => void
        ): this;
        event: 'drain',
        listener: () => void
        ): this;
        event: 'end',
        listener: () => void
        ): this;
        event: 'error',
        listener: (err: Error) => void
        ): this;
        event: 'finish',
        listener: () => void
        ): this;
        event: 'pause',
        listener: () => void
        ): this;
        event: 'pipe',
        listener: (src: Readable) => void
        ): this;
        event: 'readable',
        listener: () => void
        ): this;
        event: 'resume',
        listener: () => void
        ): this;
        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;
        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • push(
        chunk: any,
        encoding?: BufferEncoding
        ): boolean;
      • rawListeners(
        eventName: string | symbol
        ): Function[];

        Returns a copy of the array of listeners for the event named eventName, including any wrappers (such as those created by .once()).

        import { EventEmitter } from 'node:events';
        const emitter = new EventEmitter();
        emitter.once('log', () => console.log('log once'));
        
        // Returns a new Array with a function `onceWrapper` which has a property
        // `listener` which contains the original listener bound above
        const listeners = emitter.rawListeners('log');
        const logFnWrapper = listeners[0];
        
        // Logs "log once" to the console and does not unbind the `once` event
        logFnWrapper.listener();
        
        // Logs "log once" to the console and removes the listener
        logFnWrapper();
        
        emitter.on('log', () => console.log('log persistently'));
        // Will return a new Array with a single function bound by `.on()` above
        const newListeners = emitter.rawListeners('log');
        
        // Logs "log persistently" twice
        newListeners[0]();
        emitter.emit('log');
        
      • read(
        size?: number
        ): any;

        The readable.read() method reads data out of the internal buffer and returns it. If no data is available to be read, null is returned. By default, the data is returned as a Buffer object unless an encoding has been specified using the readable.setEncoding() method or the stream is operating in object mode.

        The optional size argument specifies a specific number of bytes to read. If size bytes are not available to be read, null will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.

        If the size argument is not specified, all of the data contained in the internal buffer will be returned.

        The size argument must be less than or equal to 1 GiB.

        The readable.read() method should only be called on Readable streams operating in paused mode. In flowing mode, readable.read() is called automatically until the internal buffer is fully drained.

        const readable = getReadableStreamSomehow();
        
        // 'readable' may be triggered multiple times as data is buffered in
        readable.on('readable', () => {
          let chunk;
          console.log('Stream is readable (new data received in buffer)');
          // Use a loop to make sure we read all currently available data
          while (null !== (chunk = readable.read())) {
            console.log(`Read ${chunk.length} bytes of data...`);
          }
        });
        
        // 'end' will be triggered once when there is no more data available
        readable.on('end', () => {
          console.log('Reached end of stream.');
        });
        

        Each call to readable.read() returns a chunk of data, or null. The chunks are not concatenated. A while loop is necessary to consume all data currently in the buffer. When reading a large file .read() may return null, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new 'readable' event will be emitted when there is more data in the buffer. Finally the 'end' event will be emitted when there is no more data to come.

        Therefore to read a file's whole contents from a readable, it is necessary to collect chunks across multiple 'readable' events:

        const chunks = [];
        
        readable.on('readable', () => {
          let chunk;
          while (null !== (chunk = readable.read())) {
            chunks.push(chunk);
          }
        });
        
        readable.on('end', () => {
          const content = chunks.join('');
        });
        

        A Readable stream in object mode will always return a single item from a call to readable.read(size), regardless of the value of the size argument.

        If the readable.read() method returns a chunk of data, a 'data' event will also be emitted.

        Calling read after the 'end' event has been emitted will return null. No runtime error will be raised.

        @param size

        Optional argument to specify how much data to read.

      • reduce<T = any>(
        fn: (previous: any, data: any, options?: Pick<ArrayOptions, 'signal'>) => T,
        initial?: undefined,
        options?: Pick<ArrayOptions, 'signal'>
        ): Promise<T>;

        This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.

        If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.

        The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function and use the readable.map method.

        @param fn

        a reducer function to call over every chunk in the stream. Async or not.

        @param initial

        the initial value to use in the reduction.

        @returns

        a promise for the final value of the reduction.

        reduce<T = any>(
        fn: (previous: T, data: any, options?: Pick<ArrayOptions, 'signal'>) => T,
        initial: T,
        options?: Pick<ArrayOptions, 'signal'>
        ): Promise<T>;

        This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.

        If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.

        The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function and use the readable.map method.

        @param fn

        a reducer function to call over every chunk in the stream. Async or not.

        @param initial

        the initial value to use in the reduction.

        @returns

        a promise for the final value of the reduction.
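
        For illustration:

        import { Readable } from 'node:stream';

        // Sums the chunks, starting from the initial value 0.
        const sum = await Readable.from([1, 2, 3, 4]).reduce((acc, x) => acc + x, 0);
        console.log(sum); // 10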

      • removeAllListeners(
        eventName?: string | symbol
        ): this;

        Removes all listeners, or those of the specified eventName.

        It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).

        Returns a reference to the EventEmitter, so that calls can be chained.

      • removeListener(
        event: 'close',
        listener: () => void
        ): this;

        Removes the specified listener from the listener array for the event named eventName.

        const callback = (stream) => {
          console.log('someone connected!');
        };
        server.on('connection', callback);
        // ...
        server.removeListener('connection', callback);
        

        removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance.

        Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from an emit() in progress. Subsequent events behave as expected.

        import { EventEmitter } from 'node:events';
        class MyEmitter extends EventEmitter {}
        const myEmitter = new MyEmitter();
        
        const callbackA = () => {
          console.log('A');
          myEmitter.removeListener('event', callbackB);
        };
        
        const callbackB = () => {
          console.log('B');
        };
        
        myEmitter.on('event', callbackA);
        
        myEmitter.on('event', callbackB);
        
        // callbackA removes listener callbackB but it will still be called.
        // Internal listener array at time of emit [callbackA, callbackB]
        myEmitter.emit('event');
        // Prints:
        //   A
        //   B
        
        // callbackB is now removed.
        // Internal listener array [callbackA]
        myEmitter.emit('event');
        // Prints:
        //   A
        

        Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.

        When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed:

        import { EventEmitter } from 'node:events';
        const ee = new EventEmitter();
        
        function pong() {
          console.log('pong');
        }
        
        ee.on('ping', pong);
        ee.once('ping', pong);
        ee.removeListener('ping', pong);
        
        ee.emit('ping');
        ee.emit('ping');
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        event: 'data',
        listener: (chunk: any) => void
        ): this;
        event: 'drain',
        listener: () => void
        ): this;
        event: 'end',
        listener: () => void
        ): this;
        event: 'error',
        listener: (err: Error) => void
        ): this;
        event: 'finish',
        listener: () => void
        ): this;
        event: 'pause',
        listener: () => void
        ): this;
        event: 'pipe',
        listener: (src: Readable) => void
        ): this;
        event: 'readable',
        listener: () => void
        ): this;
        event: 'resume',
        listener: () => void
        ): this;
        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;
        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • resume(): this;

        The readable.resume() method causes an explicitly paused Readable stream to resume emitting 'data' events, switching the stream into flowing mode.

        The readable.resume() method can be used to fully consume the data from a stream without actually processing any of that data:

        getReadableStreamSomehow()
          .resume()
          .on('end', () => {
            console.log('Reached the end, but did not read anything.');
          });
        

        The readable.resume() method has no effect if there is a 'readable' event listener.

      • setAAD(
        buffer: ArrayBufferView,
        options: { plaintextLength: number }
        ): this;
      • setAutoPadding(
        autoPadding?: boolean
        ): this;

        When using block encryption algorithms, the Cipher class will automatically add padding to the input data to the appropriate block size. To disable the default padding call cipher.setAutoPadding(false).

        When autoPadding is false, the length of the entire input data must be a multiple of the cipher's block size or cipher.final() will throw an error. Disabling automatic padding is useful for non-standard padding, for instance using 0x0 instead of PKCS padding.

        The cipher.setAutoPadding() method must be called before cipher.final().

        @returns

        for method chaining.
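
        A minimal sketch, assuming key and iv were derived as in the earlier examples (with padding disabled, the input must be block-aligned):

        const { createCipheriv } = await import('node:crypto');

        const cipher = createCipheriv('aes-192-cbc', key, iv);
        cipher.setAutoPadding(false);

        // 'sixteen byte txt' is exactly one 16-byte AES block,
        // so final() will not throw.
        let encrypted = cipher.update('sixteen byte txt', 'utf8', 'hex');
        encrypted += cipher.final('hex');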

      • setDefaultEncoding(
        encoding: BufferEncoding
        ): this;

        The writable.setDefaultEncoding() method sets the default encoding for a Writable stream.

        @param encoding

        The new default encoding

      • setEncoding(
        encoding: BufferEncoding
        ): this;

        The readable.setEncoding() method sets the character encoding for data read from the Readable stream.

        By default, no encoding is assigned and stream data will be returned as Buffer objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than as Buffer objects. For instance, calling readable.setEncoding('utf8') will cause the output data to be interpreted as UTF-8 data, and passed as strings. Calling readable.setEncoding('hex') will cause the data to be encoded in hexadecimal string format.

        The Readable stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream as Buffer objects.

        const readable = getReadableStreamSomehow();
        readable.setEncoding('utf8');
        readable.on('data', (chunk) => {
          assert.equal(typeof chunk, 'string');
          console.log('Got %d characters of string data:', chunk.length);
        });
        
        @param encoding

        The encoding to use.

      • setMaxListeners(
        n: number
        ): this;

        By default EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps finding memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.

        Returns a reference to the EventEmitter, so that calls can be chained.

      • some(
        fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
        options?: ArrayOptions
        ): Promise<boolean>;

        This method is similar to Array.prototype.some and calls fn on each chunk in the stream until the awaited return value is true (or any truthy value). Once an fn call on a chunk awaited return value is truthy, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to true if fn returned a truthy value for at least one of the chunks.
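
        For illustration:

        import { Readable } from 'node:stream';

        // Resolves true as soon as one chunk matches; the stream is then destroyed.
        const hasNegative = await Readable.from([1, -2, 3]).some((x) => x < 0);
        console.log(hasNegative); // true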

      • take(
        limit: number,
        options?: Pick<ArrayOptions, 'signal'>

        This method returns a new stream with the first limit chunks.

        @param limit

        the number of chunks to take from the readable.

        @returns

        a stream with limit chunks taken.
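
        For illustration:

        import { Readable } from 'node:stream';

        // Keeps only the first two chunks.
        const firstTwo = await Readable.from([1, 2, 3, 4]).take(2).toArray();
        console.log(firstTwo); // [1, 2]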

      • toArray(
        options?: Pick<ArrayOptions, 'signal'>
        ): Promise<any[]>;

        This method allows easily obtaining the contents of a stream.

        As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.

        @returns

        a promise containing an array with the contents of the stream.
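
        For illustration:

        import { Readable } from 'node:stream';

        // Buffers the entire stream into one array.
        const contents = await Readable.from([1, 2, 3]).toArray();
        console.log(contents); // [1, 2, 3]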

      • uncork(): void;

        The writable.uncork() method flushes all data buffered since cork was called.

        When using writable.cork() and writable.uncork() to manage the buffering of writes to a stream, defer calls to writable.uncork() using process.nextTick(). Doing so allows batching of all writable.write() calls that occur within a given Node.js event loop phase.

        stream.cork();
        stream.write('some ');
        stream.write('data ');
        process.nextTick(() => stream.uncork());
        

        If the writable.cork() method is called multiple times on a stream, the same number of calls to writable.uncork() must be called to flush the buffered data.

        stream.cork();
        stream.write('some ');
        stream.cork();
        stream.write('data ');
        process.nextTick(() => {
          stream.uncork();
          // The data will not be flushed until uncork() is called a second time.
          stream.uncork();
        });
        

        See also: writable.cork().

      • unpipe(
        destination?: WritableStream
        ): this;

        The readable.unpipe() method detaches a Writable stream previously attached using the pipe method.

        If the destination is not specified, then all pipes are detached.

        If the destination is specified, but no pipe is set up for it, then the method does nothing.

        import fs from 'node:fs';
        const readable = getReadableStreamSomehow();
        const writable = fs.createWriteStream('file.txt');
        // All the data from readable goes into 'file.txt',
        // but only for the first second.
        readable.pipe(writable);
        setTimeout(() => {
          console.log('Stop writing to file.txt.');
          readable.unpipe(writable);
          console.log('Manually close the file stream.');
          writable.end();
        }, 1000);
        
        @param destination

        Optional specific stream to unpipe

      • unshift(
        chunk: any,
        encoding?: BufferEncoding
        ): void;

        Passing chunk as null signals the end of the stream (EOF) and behaves the same as readable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.

        The readable.unshift() method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.

        The stream.unshift(chunk) method cannot be called after the 'end' event has been emitted or a runtime error will be thrown.

        Developers using stream.unshift() often should consider switching to use of a Transform stream instead. See the API for stream implementers section for more information.

        // Pull off a header delimited by \n\n.
        // Use unshift() if we get too much.
        // Call the callback with (error, header, stream).
        import { StringDecoder } from 'node:string_decoder';
        function parseHeader(stream, callback) {
          stream.on('error', callback);
          stream.on('readable', onReadable);
          const decoder = new StringDecoder('utf8');
          let header = '';
          function onReadable() {
            let chunk;
            while (null !== (chunk = stream.read())) {
              const str = decoder.write(chunk);
              if (str.includes('\n\n')) {
                // Found the header boundary.
                const split = str.split(/\n\n/);
                header += split.shift();
                const remaining = split.join('\n\n');
                const buf = Buffer.from(remaining, 'utf8');
                stream.removeListener('error', callback);
                // Remove the 'readable' listener before unshifting.
                stream.removeListener('readable', onReadable);
                if (buf.length)
                  stream.unshift(buf);
                // Now the body of the message can be read from the stream.
                callback(null, header, stream);
                return;
              }
              // Still reading the header.
              header += str;
            }
          }
        }
        

        Unlike push, stream.unshift(chunk) will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results if readable.unshift() is called during a read (i.e. from within a _read implementation on a custom stream). Following the call to readable.unshift() with an immediate push will reset the reading state appropriately, however it is best to simply avoid calling readable.unshift() while in the process of performing a read.

        @param chunk

        Chunk of data to unshift onto the read queue. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray}, {DataView} or null. For object mode streams, chunk may be any JavaScript value.

        @param encoding

        Encoding of string chunks. Must be a valid Buffer encoding, such as 'utf8' or 'ascii'.

      • update(
        data: BinaryLike
        ): Buffer;

        Updates the cipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer, TypedArray, or DataView. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.

        The outputEncoding specifies the output format of the enciphered data. If outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.

        The cipher.update() method can be called multiple times with new data until cipher.final() is called. Calling cipher.update() after cipher.final() will result in an error being thrown.

        update(
        data: string,
        inputEncoding: Encoding
        ): Buffer;

        Updates the cipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer, TypedArray, or DataView. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.

        The outputEncoding specifies the output format of the enciphered data. If outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.

        The cipher.update() method can be called multiple times with new data until cipher.final() is called. Calling cipher.update() after cipher.final() will result in an error being thrown.

        @param inputEncoding

        The encoding of the data.

        update(
        data: ArrayBufferView,
        inputEncoding: undefined,
        outputEncoding: Encoding
        ): string;

        Updates the cipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer, TypedArray, or DataView. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.

        The outputEncoding specifies the output format of the enciphered data. If outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.

        The cipher.update() method can be called multiple times with new data until cipher.final() is called. Calling cipher.update() after cipher.final() will result in an error being thrown.

        @param inputEncoding

        The encoding of the data.

        @param outputEncoding

        The encoding of the return value.

        update(
        data: string,
        inputEncoding: undefined | Encoding,
        outputEncoding: Encoding
        ): string;

        Updates the cipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer, TypedArray, or DataView. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.

        The outputEncoding specifies the output format of the enciphered data. If outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.

        The cipher.update() method can be called multiple times with new data until cipher.final() is called. Calling cipher.update() after cipher.final() will result in an error being thrown.

        @param inputEncoding

        The encoding of the data.

        @param outputEncoding

        The encoding of the return value.
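
        As a quick illustration of the encoding rules (a minimal sketch, not taken from the Node.js docs; the key and IV here are random placeholders):

        const { createCipheriv, randomBytes } = await import('node:crypto');

        const key = randomBytes(24); // aes-192 needs a 24-byte key
        const iv = randomBytes(16);
        const cipher = createCipheriv('aes-192-cbc', key, iv);

        // String input with an inputEncoding, hex string output:
        let encrypted = cipher.update('some clear text data', 'utf8', 'hex');
        encrypted += cipher.final('hex');

        // With a Buffer input, inputEncoding is ignored; without an
        // outputEncoding, a Buffer would be returned instead of a string.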

      • wrap(
        stream: ReadableStream
        ): this;

        Prior to Node.js 0.10, streams did not implement the entire node:stream module API as it is currently defined. (See Compatibility for more information.)

        When using an older Node.js library that emits 'data' events and has a pause method that is advisory only, the readable.wrap() method can be used to create a Readable stream that uses the old stream as its data source.

        It will rarely be necessary to use readable.wrap() but the method has been provided as a convenience for interacting with older Node.js applications and libraries.

        import { OldReader } from './old-api-module.js';
        import { Readable } from 'node:stream';
        const oreader = new OldReader();
        const myReader = new Readable().wrap(oreader);
        
        myReader.on('readable', () => {
          myReader.read(); // etc.
        });
        
        @param stream

        An "old style" readable stream

      • write(
        chunk: any,
        callback?: (error: undefined | null | Error) => void
        ): boolean;

        The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

        The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

        While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

        Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

        If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

        function write(data, cb) {
          if (!stream.write(data)) {
            stream.once('drain', cb);
          } else {
            process.nextTick(cb);
          }
        }
        
        // Wait for cb to be called before doing any other write.
        write('hello', () => {
          console.log('Write completed, do more writes now.');
        });
        

        A Writable stream in object mode will always ignore the encoding argument.

        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        @param callback

        Callback for when this chunk of data is flushed.

        @returns

        false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.

        write(
        chunk: any,
        encoding: BufferEncoding,
        callback?: (error: undefined | null | Error) => void
        ): boolean;

        The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

        The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

        While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

        Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

        If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

        function write(data, cb) {
          if (!stream.write(data)) {
            stream.once('drain', cb);
          } else {
            process.nextTick(cb);
          }
        }
        
        // Wait for cb to be called before doing any other write.
        write('hello', () => {
          console.log('Write completed, do more writes now.');
        });
        

        A Writable stream in object mode will always ignore the encoding argument.

        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        @param encoding

        The encoding, if chunk is a string.

        @param callback

        Callback for when this chunk of data is flushed.

        @returns

        false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.

    • interface CipherCCMOptions

    • interface CipherChaCha20Poly1305

      Instances of the Cipher class are used to encrypt data. The class can be used in one of two ways:

      • As a stream that is both readable and writable, where plain unencrypted data is written to produce encrypted data on the readable side, or
      • Using the cipher.update() and cipher.final() methods to produce the encrypted data.

      The createCipheriv method is used to create Cipher instances. Cipher objects are not to be created directly using the new keyword.

      Example: Using Cipher objects as streams:

      const {
        scrypt,
        randomFill,
        createCipheriv,
      } = await import('node:crypto');
      
      const algorithm = 'aes-192-cbc';
      const password = 'Password used to generate key';
      
      // First, we'll generate the key. The key length is dependent on the algorithm.
      // In this case for aes192, it is 24 bytes (192 bits).
      scrypt(password, 'salt', 24, (err, key) => {
        if (err) throw err;
        // Then, we'll generate a random initialization vector
        randomFill(new Uint8Array(16), (err, iv) => {
          if (err) throw err;
      
          // Once we have the key and iv, we can create and use the cipher...
          const cipher = createCipheriv(algorithm, key, iv);
      
          let encrypted = '';
          cipher.setEncoding('hex');
      
          cipher.on('data', (chunk) => encrypted += chunk);
          cipher.on('end', () => console.log(encrypted));
      
          cipher.write('some clear text data');
          cipher.end();
        });
      });
      

      Example: Using Cipher and piped streams:

      import {
        createReadStream,
        createWriteStream,
      } from 'node:fs';
      
      import {
        pipeline,
      } from 'node:stream';
      
      const {
        scrypt,
        randomFill,
        createCipheriv,
      } = await import('node:crypto');
      
      const algorithm = 'aes-192-cbc';
      const password = 'Password used to generate key';
      
      // First, we'll generate the key. The key length is dependent on the algorithm.
      // In this case for aes192, it is 24 bytes (192 bits).
      scrypt(password, 'salt', 24, (err, key) => {
        if (err) throw err;
        // Then, we'll generate a random initialization vector
        randomFill(new Uint8Array(16), (err, iv) => {
          if (err) throw err;
      
          const cipher = createCipheriv(algorithm, key, iv);
      
          const input = createReadStream('test.js');
          const output = createWriteStream('test.enc');
      
          pipeline(input, cipher, output, (err) => {
            if (err) throw err;
          });
        });
      });
      

      Example: Using the cipher.update() and cipher.final() methods:

      const {
        scrypt,
        randomFill,
        createCipheriv,
      } = await import('node:crypto');
      
      const algorithm = 'aes-192-cbc';
      const password = 'Password used to generate key';
      
      // First, we'll generate the key. The key length is dependent on the algorithm.
      // In this case for aes192, it is 24 bytes (192 bits).
      scrypt(password, 'salt', 24, (err, key) => {
        if (err) throw err;
        // Then, we'll generate a random initialization vector
        randomFill(new Uint8Array(16), (err, iv) => {
          if (err) throw err;
      
          const cipher = createCipheriv(algorithm, key, iv);
      
          let encrypted = cipher.update('some clear text data', 'utf8', 'hex');
          encrypted += cipher.final('hex');
          console.log(encrypted);
        });
      });
      
      • allowHalfOpen: boolean

        If false then the stream will automatically end the writable side when the readable side ends. Set initially by the allowHalfOpen constructor option, which defaults to true.

        This can be changed manually to change the half-open behavior of an existing Duplex stream instance, but must be changed before the 'end' event is emitted.

      • readonly closed: boolean

        Is true after 'close' has been emitted.

      • destroyed: boolean

        Is true after readable.destroy() has been called.

      • readonly errored: null | Error

        Returns error if the stream has been destroyed with an error.

      • readable: boolean

        Is true if it is safe to call read, which means the stream has not been destroyed or emitted 'error' or 'end'.

      • readonly readableAborted: boolean

        Returns whether the stream was destroyed or errored before emitting 'end'.

      • readonly readableDidRead: boolean

        Returns whether 'data' has been emitted.

      • readonly readableEncoding: null | BufferEncoding

        Getter for the property encoding of a given Readable stream. The encoding property can be set using the setEncoding method.

      • readonly readableEnded: boolean

        Becomes true when 'end' event is emitted.

      • readonly readableFlowing: null | boolean

        This property reflects the current state of a Readable stream as described in the Three states section.

      • readonly readableHighWaterMark: number

        Returns the value of highWaterMark passed when creating this Readable.

      • readonly readableLength: number

        This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the highWaterMark.

      • readonly readableObjectMode: boolean

        Getter for the property objectMode of a given Readable stream.

      • readonly writable: boolean

        Is true if it is safe to call writable.write(), which means the stream has not been destroyed, errored, or ended.

      • readonly writableCorked: number

        Number of times writable.uncork() needs to be called in order to fully uncork the stream.

      • readonly writableEnded: boolean

        Is true after writable.end() has been called. This property does not indicate whether the data has been flushed, for this use writable.writableFinished instead.

      • readonly writableFinished: boolean

        Is set to true immediately before the 'finish' event is emitted.

      • readonly writableHighWaterMark: number

        Return the value of highWaterMark passed when creating this Writable.

      • readonly writableLength: number

        This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the highWaterMark.

      • readonly writableNeedDrain: boolean

        Is true if the stream's buffer has been full and stream will emit 'drain'.

      • readonly writableObjectMode: boolean

        Getter for the property objectMode of a given Writable stream.

      • callback: (error?: null | Error) => void
        ): void;
      • error: null | Error,
        callback: (error?: null | Error) => void
        ): void;
      • callback: (error?: null | Error) => void
        ): void;
      • ): void;
      • size: number
        ): void;
      • chunk: any,
        encoding: BufferEncoding,
        ): void;
      • chunk: any,
        encoding: BufferEncoding,
        callback: (error?: null | Error) => void
        ): void;
      • chunks: { chunk: any; encoding: BufferEncoding }[],
        callback: (error?: null | Error) => void
        ): void;
      • [Symbol.asyncDispose](): Promise<void>;

        Calls readable.destroy() with an AbortError and returns a promise that fulfills when the stream is finished.

      • [Symbol.asyncIterator](): AsyncIterator<any>;
      • error: Error,
        event: string | symbol,
        ...args: AnyRest
        ): void;
      • addListener(
        event: 'close',
        listener: () => void
        ): this;

        Event emitter. The defined events include:

        1. close
        2. data
        3. drain
        4. end
        5. error
        6. finish
        7. pause
        8. pipe
        9. readable
        10. resume
        11. unpipe
        event: 'data',
        listener: (chunk: any) => void
        ): this;
        event: 'drain',
        listener: () => void
        ): this;
        event: 'end',
        listener: () => void
        ): this;
        event: 'error',
        listener: (err: Error) => void
        ): this;
        event: 'finish',
        listener: () => void
        ): this;
        event: 'pause',
        listener: () => void
        ): this;
        event: 'pipe',
        listener: (src: Readable) => void
        ): this;
        event: 'readable',
        listener: () => void
        ): this;
        event: 'resume',
        listener: () => void
        ): this;
        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;
        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • asIndexedPairs(
        options?: Pick<ArrayOptions, 'signal'>
        ): Readable;

        This method returns a new stream with chunks of the underlying stream paired with a counter in the form [index, chunk]. The first index value is 0 and it increases by 1 for each chunk produced.

        @returns

        a stream of indexed pairs.
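
        A minimal sketch (assuming Readable.from and these helper methods are available):

        import { Readable } from 'node:stream';

        const pairs = await Readable.from(['a', 'b', 'c']).asIndexedPairs().toArray();
        console.log(pairs); // [[0, 'a'], [1, 'b'], [2, 'c']]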

      • compose<T extends ReadableStream>(
        stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,
        options?: { signal: AbortSignal }
        ): T;
      • cork(): void;

        The writable.cork() method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.

        The primary intent of writable.cork() is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them all to writable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork() without implementing writable._writev() may have an adverse effect on throughput.

        See also: writable.uncork(), writable._writev().

      • destroy(
        error?: Error
        ): this;

        Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the readable stream will release any internal resources and subsequent calls to push() will be ignored.

        Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.

        Implementors should not override this method, but instead implement readable._destroy().

        @param error

        Error which will be passed as payload in 'error' event

      • drop(
        limit: number,
        options?: Pick<ArrayOptions, 'signal'>
        ): Readable;

        This method returns a new stream with the first limit chunks dropped from the start.

        @param limit

        the number of chunks to drop from the readable.

        @returns

        a stream with limit chunks dropped from the start.
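
        For example (a small sketch, assuming Readable.from is available):

        import { Readable } from 'node:stream';

        // Skip the first two chunks and collect the rest.
        const rest = await Readable.from([1, 2, 3, 4]).drop(2).toArray();
        console.log(rest); // [3, 4]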

      • emit(
        event: 'close'
        ): boolean;

        Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

        Returns true if the event had listeners, false otherwise.

        import { EventEmitter } from 'node:events';
        const myEmitter = new EventEmitter();
        
        // First listener
        myEmitter.on('event', function firstListener() {
          console.log('Helloooo! first listener');
        });
        // Second listener
        myEmitter.on('event', function secondListener(arg1, arg2) {
          console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
        });
        // Third listener
        myEmitter.on('event', function thirdListener(...args) {
          const parameters = args.join(', ');
          console.log(`event with parameters ${parameters} in third listener`);
        });
        
        console.log(myEmitter.listeners('event'));
        
        myEmitter.emit('event', 1, 2, 3, 4, 5);
        
        // Prints:
        // [
        //   [Function: firstListener],
        //   [Function: secondListener],
        //   [Function: thirdListener]
        // ]
        // Helloooo! first listener
        // event with parameters 1, 2 in second listener
        // event with parameters 1, 2, 3, 4, 5 in third listener
        
        event: 'data',
        chunk: any
        ): boolean;
        event: 'drain'
        ): boolean;
        event: 'end'
        ): boolean;
        event: 'error',
        err: Error
        ): boolean;
        event: 'finish'
        ): boolean;
        event: 'pause'
        ): boolean;
        event: 'pipe',
        src: Readable
        ): boolean;
        event: 'readable'
        ): boolean;
        event: 'resume'
        ): boolean;
        event: 'unpipe',
        src: Readable
        ): boolean;
        event: string | symbol,
        ...args: any[]
        ): boolean;
      • end(
        cb?: () => void
        ): this;

        Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

        Calling the write method after calling end will raise an error.

        // Write 'hello, ' and then end with 'world!'.
        import fs from 'node:fs';
        const file = fs.createWriteStream('example.txt');
        file.write('hello, ');
        file.end('world!');
        // Writing more now is not allowed!
        
        end(
        chunk: any,
        cb?: () => void
        ): this;

        Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

        Calling the write method after calling end will raise an error.

        // Write 'hello, ' and then end with 'world!'.
        import fs from 'node:fs';
        const file = fs.createWriteStream('example.txt');
        file.write('hello, ');
        file.end('world!');
        // Writing more now is not allowed!
        
        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        end(
        chunk: any,
        encoding: BufferEncoding,
        cb?: () => void
        ): this;

        Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

        Calling the write method after calling end will raise an error.

        // Write 'hello, ' and then end with 'world!'.
        import fs from 'node:fs';
        const file = fs.createWriteStream('example.txt');
        file.write('hello, ');
        file.end('world!');
        // Writing more now is not allowed!
        
        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        @param encoding

        The encoding if chunk is a string

      • eventNames(): (string | symbol)[];

        Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or Symbols.

        import { EventEmitter } from 'node:events';
        
        const myEE = new EventEmitter();
        myEE.on('foo', () => {});
        myEE.on('bar', () => {});
        
        const sym = Symbol('symbol');
        myEE.on(sym, () => {});
        
        console.log(myEE.eventNames());
        // Prints: [ 'foo', 'bar', Symbol(symbol) ]
        
      • every(
        fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
        options?: ArrayOptions
        ): Promise<boolean>;

        This method is similar to Array.prototype.every and calls fn on each chunk in the stream to check whether all awaited return values are truthy. Once an awaited fn call on a chunk returns a falsy value, the stream is destroyed and the promise is fulfilled with false. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to true if fn returned a truthy value for every one of the chunks.
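
        A minimal sketch (assuming Readable.from is available):

        import { Readable } from 'node:stream';

        const allPositive = await Readable.from([1, 2, 3]).every((n) => n > 0);
        console.log(allPositive); // true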

      • filter(
        fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
        options?: ArrayOptions
        ): Readable;

        This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be awaited.

        @param fn

        a function to filter chunks from the stream. Async or not.

        @returns

        a stream filtered with the predicate fn.
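
        For example (a small sketch, assuming Readable.from is available):

        import { Readable } from 'node:stream';

        const evens = await Readable.from([1, 2, 3, 4])
          .filter((n) => n % 2 === 0)
          .toArray();
        console.log(evens); // [2, 4]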

      • final(): Buffer;

        Once the cipher.final() method has been called, the Cipher object can no longer be used to encrypt data. Attempts to call cipher.final() more than once will result in an error being thrown.

        @returns

        Any remaining enciphered contents. If outputEncoding is specified, a string is returned. If an outputEncoding is not provided, a Buffer is returned.

        final(
        outputEncoding: BufferEncoding
        ): string;

        Once the cipher.final() method has been called, the Cipher object can no longer be used to encrypt data. Attempts to call cipher.final() more than once will result in an error being thrown.

        @param outputEncoding

        The encoding of the return value.

        @returns

        Any remaining enciphered contents. If outputEncoding is specified, a string is returned. If an outputEncoding is not provided, a Buffer is returned.

      • find<T>(
        fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => data is T,
        options?: ArrayOptions
        ): Promise<undefined | T>;

        This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.

        find(
        fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
        options?: ArrayOptions
        ): Promise<any>;

        This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.
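
        A minimal sketch (assuming Readable.from is available):

        import { Readable } from 'node:stream';

        // Resolves with the first matching chunk; the stream is then destroyed.
        const firstBig = await Readable.from([1, 2, 30, 4]).find((n) => n > 10);
        console.log(firstBig); // 30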

      • flatMap(
        fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => any,
        options?: ArrayOptions
        ): Readable;

        This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.

        It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.

        @param fn

        a function to map over every chunk in the stream. May be async. May be a stream or generator.

        @returns

        a stream flat-mapped with the function fn.
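
        For example (a small sketch, assuming Readable.from is available):

        import { Readable } from 'node:stream';

        // Each chunk maps to an iterable, which is flattened into the result.
        const words = await Readable.from(['hello world', 'foo bar'])
          .flatMap((line) => line.split(' '))
          .toArray();
        console.log(words); // ['hello', 'world', 'foo', 'bar']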

      • forEach(
        fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => void | Promise<void>,
        options?: ArrayOptions
        ): Promise<void>;

        This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be awaited.

        This method is different from for await...of loops in that it can optionally process chunks concurrently. In addition, a forEach iteration can only be stopped by having passed a signal option and aborting the related AbortController while for await...of can be stopped with break or return. In either case the stream will be destroyed.

        This method is different from listening to the 'data' event in that it uses the readable event in the underlying machinery and can limit the number of concurrent fn calls.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise for when the stream has finished.
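
        A minimal sketch (assuming Readable.from is available; the concurrency option comes from ArrayOptions):

        import { Readable } from 'node:stream';

        // Processes up to two chunks at a time.
        await Readable.from([1, 2, 3, 4]).forEach(async (n) => {
          console.log(n);
        }, { concurrency: 2 });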

      • getMaxListeners(): number;

        Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to EventEmitter.defaultMaxListeners.

      • isPaused(): boolean;

        The readable.isPaused() method returns the current operating state of the Readable. This is used primarily by the mechanism that underlies the readable.pipe() method. In most typical cases, there will be no reason to use this method directly.

        const readable = new stream.Readable();
        
        readable.isPaused(); // === false
        readable.pause();
        readable.isPaused(); // === true
        readable.resume();
        readable.isPaused(); // === false
        
      • iterator(
        options?: { destroyOnReturn: boolean }
        ): AsyncIterator<any>;

        The iterator created by this method gives users the option to cancel the destruction of the stream when the for await...of loop is exited by return, break, or throw, or when the stream emits an error during iteration.
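
        For example (a small sketch, assuming Readable.from is available):

        import { Readable } from 'node:stream';

        const readable = Readable.from([1, 2, 3]);
        for await (const chunk of readable.iterator({ destroyOnReturn: false })) {
          console.log(chunk); // 1
          break; // with destroyOnReturn: false, the stream is not destroyed here
        }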

      • listenerCount(
        eventName: string | symbol,
        listener?: Function
        ): number;

        Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.

        @param eventName

        The name of the event being listened for

        @param listener

        The event handler function

      • listeners(
        eventName: string | symbol
        ): Function[];

        Returns a copy of the array of listeners for the event named eventName.

        server.on('connection', (stream) => {
          console.log('someone connected!');
        });
        console.log(util.inspect(server.listeners('connection')));
        // Prints: [ [Function] ]
        
      • map(
        fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => any,
        options?: ArrayOptions
        ): Readable;

        This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be awaited before being passed to the result stream.

        @param fn

        a function to map over every chunk in the stream. Async or not.

        @returns

        a stream mapped with the function fn.
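
        A minimal sketch (assuming Readable.from is available):

        import { Readable } from 'node:stream';

        const doubled = await Readable.from([1, 2, 3]).map((n) => n * 2).toArray();
        console.log(doubled); // [2, 4, 6]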

      • off<K>(
        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;

        Alias for emitter.removeListener().

      • on(
        event: 'close',
        listener: () => void
        ): this;

        Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

        server.on('connection', (stream) => {
          console.log('someone connected!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

        import { EventEmitter } from 'node:events';
        const myEE = new EventEmitter();
        myEE.on('foo', () => console.log('a'));
        myEE.prependListener('foo', () => console.log('b'));
        myEE.emit('foo');
        // Prints:
        //   b
        //   a
        
        @param listener

        The callback function

        event: 'data',
        listener: (chunk: any) => void
        ): this;
        event: 'drain',
        listener: () => void
        ): this;
        event: 'end',
        listener: () => void
        ): this;
        event: 'error',
        listener: (err: Error) => void
        ): this;
        event: 'finish',
        listener: () => void
        ): this;
        event: 'pause',
        listener: () => void
        ): this;
        event: 'pipe',
        listener: (src: Readable) => void
        ): this;
        event: 'readable',
        listener: () => void
        ): this;
        event: 'resume',
        listener: () => void
        ): this;
        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;
        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • once(
        event: 'close',
        listener: () => void
        ): this;

        Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

        server.once('connection', (stream) => {
          console.log('Ah, we have our first user!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

        import { EventEmitter } from 'node:events';
        const myEE = new EventEmitter();
        myEE.once('foo', () => console.log('a'));
        myEE.prependOnceListener('foo', () => console.log('b'));
        myEE.emit('foo');
        // Prints:
        //   b
        //   a
        
        @param listener

        The callback function

        event: 'data',
        listener: (chunk: any) => void
        ): this;
        event: 'drain',
        listener: () => void
        ): this;
        event: 'end',
        listener: () => void
        ): this;
        event: 'error',
        listener: (err: Error) => void
        ): this;
        event: 'finish',
        listener: () => void
        ): this;
        event: 'pause',
        listener: () => void
        ): this;
        event: 'pipe',
        listener: (src: Readable) => void
        ): this;
        event: 'readable',
        listener: () => void
        ): this;
        event: 'resume',
        listener: () => void
        ): this;
        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;
        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • pause(): this;

        The readable.pause() method will cause a stream in flowing mode to stop emitting 'data' events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.

        const readable = getReadableStreamSomehow();
        readable.on('data', (chunk) => {
          console.log(`Received ${chunk.length} bytes of data.`);
          readable.pause();
          console.log('There will be no additional data for 1 second.');
          setTimeout(() => {
            console.log('Now data will start flowing again.');
            readable.resume();
          }, 1000);
        });
        

        The readable.pause() method has no effect if there is a 'readable' event listener.

      • pipe<T extends WritableStream>(
        destination: T,
        options?: { end: boolean }
        ): T;
      • prependListener(
        event: 'close',
        listener: () => void
        ): this;

        Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

        server.prependListener('connection', (stream) => {
          console.log('someone connected!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        @param listener

        The callback function

        event: 'data',
        listener: (chunk: any) => void
        ): this;
        event: 'drain',
        listener: () => void
        ): this;
        event: 'end',
        listener: () => void
        ): this;
        event: 'error',
        listener: (err: Error) => void
        ): this;
        event: 'finish',
        listener: () => void
        ): this;
        event: 'pause',
        listener: () => void
        ): this;
        event: 'pipe',
        listener: (src: Readable) => void
        ): this;
        event: 'readable',
        listener: () => void
        ): this;
        event: 'resume',
        listener: () => void
        ): this;
        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;
        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • prependOnceListener(
        event: 'close',
        listener: () => void
        ): this;

        Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

        server.prependOnceListener('connection', (stream) => {
          console.log('Ah, we have our first user!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        @param listener

        The callback function

        event: 'data',
        listener: (chunk: any) => void
        ): this;
        event: 'drain',
        listener: () => void
        ): this;
        event: 'end',
        listener: () => void
        ): this;
        event: 'error',
        listener: (err: Error) => void
        ): this;
        event: 'finish',
        listener: () => void
        ): this;
        event: 'pause',
        listener: () => void
        ): this;
        event: 'pipe',
        listener: (src: Readable) => void
        ): this;
        event: 'readable',
        listener: () => void
        ): this;
        event: 'resume',
        listener: () => void
        ): this;
        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;
        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • push(
        chunk: any,
        encoding?: BufferEncoding
        ): boolean;
      • rawListeners(
        eventName: string | symbol
        ): Function[];

        Returns a copy of the array of listeners for the event named eventName, including any wrappers (such as those created by .once()).

        import { EventEmitter } from 'node:events';
        const emitter = new EventEmitter();
        emitter.once('log', () => console.log('log once'));
        
        // Returns a new Array with a function `onceWrapper` which has a property
        // `listener` which contains the original listener bound above
        const listeners = emitter.rawListeners('log');
        const logFnWrapper = listeners[0];
        
        // Logs "log once" to the console and does not unbind the `once` event
        logFnWrapper.listener();
        
        // Logs "log once" to the console and removes the listener
        logFnWrapper();
        
        emitter.on('log', () => console.log('log persistently'));
        // Will return a new Array with a single function bound by `.on()` above
        const newListeners = emitter.rawListeners('log');
        
        // Logs "log persistently" twice
        newListeners[0]();
        emitter.emit('log');
        
      • read(
        size?: number
        ): any;

        The readable.read() method reads data out of the internal buffer and returns it. If no data is available to be read, null is returned. By default, the data is returned as a Buffer object unless an encoding has been specified using the readable.setEncoding() method or the stream is operating in object mode.

        The optional size argument specifies a specific number of bytes to read. If size bytes are not available to be read, null will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.

        If the size argument is not specified, all of the data contained in the internal buffer will be returned.

        The size argument must be less than or equal to 1 GiB.

        The readable.read() method should only be called on Readable streams operating in paused mode. In flowing mode, readable.read() is called automatically until the internal buffer is fully drained.

        const readable = getReadableStreamSomehow();
        
        // 'readable' may be triggered multiple times as data is buffered in
        readable.on('readable', () => {
          let chunk;
          console.log('Stream is readable (new data received in buffer)');
          // Use a loop to make sure we read all currently available data
          while (null !== (chunk = readable.read())) {
            console.log(`Read ${chunk.length} bytes of data...`);
          }
        });
        
        // 'end' will be triggered once when there is no more data available
        readable.on('end', () => {
          console.log('Reached end of stream.');
        });
        

        Each call to readable.read() returns a chunk of data, or null. The chunks are not concatenated. A while loop is necessary to consume all data currently in the buffer. When reading a large file, .read() may return null, having consumed all buffered content so far, even though more data is still to come and has not yet been buffered. In this case a new 'readable' event will be emitted when there is more data in the buffer. Finally, the 'end' event will be emitted when there is no more data to come.

        Therefore to read a file's whole contents from a readable, it is necessary to collect chunks across multiple 'readable' events:

        const chunks = [];
        
        readable.on('readable', () => {
          let chunk;
          while (null !== (chunk = readable.read())) {
            chunks.push(chunk);
          }
        });
        
        readable.on('end', () => {
          const content = chunks.join('');
        });
        

        A Readable stream in object mode will always return a single item from a call to readable.read(size), regardless of the value of the size argument.

        If the readable.read() method returns a chunk of data, a 'data' event will also be emitted.

        Calling read after the 'end' event has been emitted will return null. No runtime error will be raised.

        @param size

        Optional argument to specify how much data to read.

      • reduce<T = any>(
        fn: (previous: any, data: any, options?: Pick<ArrayOptions, 'signal'>) => T,
        initial?: undefined,
        options?: Pick<ArrayOptions, 'signal'>
        ): Promise<T>;

        This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.

        If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.

        The reducer function iterates the stream element-by-element which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the readable.map method.

        @param fn

        a reducer function to call over every chunk in the stream. Async or not.

        @param initial

        the initial value to use in the reduction.

        @returns

        a promise for the final value of the reduction.

        reduce<T = any>(
        fn: (previous: T, data: any, options?: Pick<ArrayOptions, 'signal'>) => T,
        initial: T,
        options?: Pick<ArrayOptions, 'signal'>
        ): Promise<T>;

        This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.

        If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.

        The reducer function iterates the stream element-by-element which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the readable.map method.

        @param fn

        a reducer function to call over every chunk in the stream. Async or not.

        @param initial

        the initial value to use in the reduction.

        @returns

        a promise for the final value of the reduction.
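
        A minimal sketch (assuming Readable.from is available):

        import { Readable } from 'node:stream';

        // Sums the chunks, starting from the initial value 0.
        const total = await Readable.from([1, 2, 3, 4]).reduce((sum, n) => sum + n, 0);
        console.log(total); // 10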

      • removeAllListeners(
        eventName?: string | symbol
        ): this;

        Removes all listeners, or those of the specified eventName.

        It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).

        Returns a reference to the EventEmitter, so that calls can be chained.

      • removeListener(
        event: 'close',
        listener: () => void
        ): this;

        Removes the specified listener from the listener array for the event named eventName.

        const callback = (stream) => {
          console.log('someone connected!');
        };
        server.on('connection', callback);
        // ...
        server.removeListener('connection', callback);
        

        removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance.

        Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from an emit() in progress. Subsequent events behave as expected.

        import { EventEmitter } from 'node:events';
        class MyEmitter extends EventEmitter {}
        const myEmitter = new MyEmitter();
        
        const callbackA = () => {
          console.log('A');
          myEmitter.removeListener('event', callbackB);
        };
        
        const callbackB = () => {
          console.log('B');
        };
        
        myEmitter.on('event', callbackA);
        
        myEmitter.on('event', callbackB);
        
        // callbackA removes listener callbackB but it will still be called.
        // Internal listener array at time of emit [callbackA, callbackB]
        myEmitter.emit('event');
        // Prints:
        //   A
        //   B
        
        // callbackB is now removed.
        // Internal listener array [callbackA]
        myEmitter.emit('event');
        // Prints:
        //   A
        

        Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.

        When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed:

        import { EventEmitter } from 'node:events';
        const ee = new EventEmitter();
        
        function pong() {
          console.log('pong');
        }
        
        ee.on('ping', pong);
        ee.once('ping', pong);
        ee.removeListener('ping', pong);
        
        ee.emit('ping');
        ee.emit('ping');
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        event: 'data',
        listener: (chunk: any) => void
        ): this;
        event: 'drain',
        listener: () => void
        ): this;
        event: 'end',
        listener: () => void
        ): this;
        event: 'error',
        listener: (err: Error) => void
        ): this;
        event: 'finish',
        listener: () => void
        ): this;
        event: 'pause',
        listener: () => void
        ): this;
        event: 'pipe',
        listener: (src: Readable) => void
        ): this;
        event: 'readable',
        listener: () => void
        ): this;
        event: 'resume',
        listener: () => void
        ): this;
        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;
        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • resume(): this;

        The readable.resume() method causes an explicitly paused Readable stream to resume emitting 'data' events, switching the stream into flowing mode.

        The readable.resume() method can be used to fully consume the data from a stream without actually processing any of that data:

        getReadableStreamSomehow()
          .resume()
          .on('end', () => {
            console.log('Reached the end, but did not read anything.');
          });
        

        The readable.resume() method has no effect if there is a 'readable' event listener.

      • buffer: ArrayBufferView,
        options: { plaintextLength: number }
        ): this;
      • autoPadding?: boolean
        ): this;

        When using block encryption algorithms, the Cipher class will automatically pad the input data to the appropriate block size. To disable the default padding, call cipher.setAutoPadding(false).

        When autoPadding is false, the length of the entire input data must be a multiple of the cipher's block size or cipher.final() will throw an error. Disabling automatic padding is useful for non-standard padding, for instance using 0x0 instead of PKCS padding.

        The cipher.setAutoPadding() method must be called before cipher.final().

        @returns

        for method chaining.
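
        A minimal sketch of encrypting exactly one block with padding disabled (the password, salt, and plaintext are illustrative):

        import { createCipheriv, randomBytes, scryptSync } from 'node:crypto';

        const key = scryptSync('Password used to generate key', 'salt', 24);
        const iv = randomBytes(16);

        const cipher = createCipheriv('aes-192-cbc', key, iv);
        cipher.setAutoPadding(false);

        // With padding disabled, the total input length must be a
        // multiple of the AES block size (16 bytes), or final() throws.
        const block = Buffer.alloc(16, 0x61); // exactly one block
        let encrypted = cipher.update(block, undefined, 'hex');
        encrypted += cipher.final('hex');
        console.log(encrypted);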

      • encoding: BufferEncoding
        ): this;

        The writable.setDefaultEncoding() method sets the default encoding for a Writable stream.

        @param encoding

        The new default encoding

      • encoding: BufferEncoding
        ): this;

        The readable.setEncoding() method sets the character encoding for data read from the Readable stream.

        By default, no encoding is assigned and stream data will be returned as Buffer objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than as Buffer objects. For instance, calling readable.setEncoding('utf8') will cause the output data to be interpreted as UTF-8 data, and passed as strings. Calling readable.setEncoding('hex') will cause the data to be encoded in hexadecimal string format.

        The Readable stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream as Buffer objects.

        import assert from 'node:assert';
        const readable = getReadableStreamSomehow();
        readable.setEncoding('utf8');
        readable.on('data', (chunk) => {
          assert.equal(typeof chunk, 'string');
          console.log('Got %d characters of string data:', chunk.length);
        });
        
        @param encoding

        The encoding to use.

      • n: number
        ): this;

        By default EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps finding memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.

        Returns a reference to the EventEmitter, so that calls can be chained.
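
        A short illustrative sketch:

        import { EventEmitter } from 'node:events';

        const emitter = new EventEmitter();
        // Raise the warning threshold for this emitter from the
        // default of 10 to 20 listeners per event.
        emitter.setMaxListeners(20);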

      • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
        options?: ArrayOptions
        ): Promise<boolean>;

        This method is similar to Array.prototype.some and calls fn on each chunk in the stream until the awaited return value is true (or any truthy value). Once an fn call's awaited return value for a chunk is truthy, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to true if fn returned a truthy value for at least one of the chunks.
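
        For example, with illustrative values:

        import { Readable } from 'node:stream';

        // Fulfilled with true as soon as a chunk greater than 2 is seen;
        // the stream is destroyed at that point.
        const found = await Readable.from([1, 2, 3, 4])
          .some((data) => data > 2);
        console.log(found); // true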

      • limit: number,
        options?: Pick<ArrayOptions, 'signal'>

        This method returns a new stream with the first limit chunks.

        @param limit

        the number of chunks to take from the readable.

        @returns

        a stream with limit chunks taken.
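
        A minimal sketch, collecting the result with toArray() for illustration:

        import { Readable } from 'node:stream';

        const firstTwo = await Readable.from([1, 2, 3, 4])
          .take(2)
          .toArray();
        console.log(firstTwo); // [ 1, 2 ]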

      • options?: Pick<ArrayOptions, 'signal'>
        ): Promise<any[]>;

        This method allows easily obtaining the contents of a stream.

        As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.

        @returns

        a promise containing an array with the contents of the stream.
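
        A minimal sketch with a small illustrative stream:

        import { Readable } from 'node:stream';

        const contents = await Readable.from([1, 2, 3]).toArray();
        console.log(contents); // [ 1, 2, 3 ]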

      • uncork(): void;

        The writable.uncork() method flushes all data buffered since cork was called.

        When using writable.cork() and writable.uncork() to manage the buffering of writes to a stream, defer calls to writable.uncork() using process.nextTick(). Doing so allows batching of all writable.write() calls that occur within a given Node.js event loop phase.

        stream.cork();
        stream.write('some ');
        stream.write('data ');
        process.nextTick(() => stream.uncork());
        

        If the writable.cork() method is called multiple times on a stream, the same number of calls to writable.uncork() must be called to flush the buffered data.

        stream.cork();
        stream.write('some ');
        stream.cork();
        stream.write('data ');
        process.nextTick(() => {
          stream.uncork();
          // The data will not be flushed until uncork() is called a second time.
          stream.uncork();
        });
        

        See also: writable.cork().

      • destination?: WritableStream
        ): this;

        The readable.unpipe() method detaches a Writable stream previously attached using the pipe method.

        If the destination is not specified, then all pipes are detached.

        If the destination is specified, but no pipe is set up for it, then the method does nothing.

        import fs from 'node:fs';
        const readable = getReadableStreamSomehow();
        const writable = fs.createWriteStream('file.txt');
        // All the data from readable goes into 'file.txt',
        // but only for the first second.
        readable.pipe(writable);
        setTimeout(() => {
          console.log('Stop writing to file.txt.');
          readable.unpipe(writable);
          console.log('Manually close the file stream.');
          writable.end();
        }, 1000);
        
        @param destination

        Optional specific stream to unpipe

      • chunk: any,
        encoding?: BufferEncoding
        ): void;

        Passing chunk as null signals the end of the stream (EOF) and behaves the same as readable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.

        The readable.unshift() method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.

        The stream.unshift(chunk) method cannot be called after the 'end' event has been emitted or a runtime error will be thrown.

        Developers who find themselves using stream.unshift() often should consider switching to a Transform stream instead. See the API for stream implementers section for more information.

        // Pull off a header delimited by \n\n.
        // Use unshift() if we get too much.
        // Call the callback with (error, header, stream).
        import { StringDecoder } from 'node:string_decoder';
        function parseHeader(stream, callback) {
          stream.on('error', callback);
          stream.on('readable', onReadable);
          const decoder = new StringDecoder('utf8');
          let header = '';
          function onReadable() {
            let chunk;
            while (null !== (chunk = stream.read())) {
              const str = decoder.write(chunk);
              if (str.includes('\n\n')) {
                // Found the header boundary.
                const split = str.split(/\n\n/);
                header += split.shift();
                const remaining = split.join('\n\n');
                const buf = Buffer.from(remaining, 'utf8');
                stream.removeListener('error', callback);
                // Remove the 'readable' listener before unshifting.
                stream.removeListener('readable', onReadable);
                if (buf.length)
                  stream.unshift(buf);
                // Now the body of the message can be read from the stream.
                callback(null, header, stream);
                return;
              }
              // Still reading the header.
              header += str;
            }
          }
        }
        

        Unlike push, stream.unshift(chunk) will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results if readable.unshift() is called during a read (i.e. from within a _read implementation on a custom stream). Following the call to readable.unshift() with an immediate push will reset the reading state appropriately, however it is best to simply avoid calling readable.unshift() while in the process of performing a read.

        @param chunk

        Chunk of data to unshift onto the read queue. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray}, {DataView} or null. For object mode streams, chunk may be any JavaScript value.

        @param encoding

        Encoding of string chunks. Must be a valid Buffer encoding, such as 'utf8' or 'ascii'.

      • data: BinaryLike
        ): Buffer;

        Updates the cipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer, TypedArray, or DataView. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.

        The outputEncoding specifies the output format of the enciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.

        The cipher.update() method can be called multiple times with new data until cipher.final() is called. Calling cipher.update() after cipher.final() will result in an error being thrown.

        data: string,
        inputEncoding: Encoding
        ): Buffer;

        Updates the cipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer, TypedArray, or DataView. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.

        The outputEncoding specifies the output format of the enciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.

        The cipher.update() method can be called multiple times with new data until cipher.final() is called. Calling cipher.update() after cipher.final() will result in an error being thrown.

        @param inputEncoding

        The encoding of the data.

        data: ArrayBufferView,
        inputEncoding: undefined,
        outputEncoding: Encoding
        ): string;

        Updates the cipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer, TypedArray, or DataView. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.

        The outputEncoding specifies the output format of the enciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.

        The cipher.update() method can be called multiple times with new data until cipher.final() is called. Calling cipher.update() after cipher.final() will result in an error being thrown.

        @param inputEncoding

        The encoding of the data.

        @param outputEncoding

        The encoding of the return value.

        data: string,
        inputEncoding: undefined | Encoding,
        outputEncoding: Encoding
        ): string;

        Updates the cipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer, TypedArray, or DataView. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.

        The outputEncoding specifies the output format of the enciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.

        The cipher.update() method can be called multiple times with new data until cipher.final() is called. Calling cipher.update() after cipher.final() will result in an error being thrown.

        @param inputEncoding

        The encoding of the data.

        @param outputEncoding

        The encoding of the return value.

      • stream: ReadableStream
        ): this;

        Prior to Node.js 0.10, streams did not implement the entire node:stream module API as it is currently defined. (See Compatibility for more information.)

        When using an older Node.js library that emits 'data' events and has a pause method that is advisory only, the readable.wrap() method can be used to create a Readable stream that uses the old stream as its data source.

        It will rarely be necessary to use readable.wrap() but the method has been provided as a convenience for interacting with older Node.js applications and libraries.

        import { OldReader } from './old-api-module.js';
        import { Readable } from 'node:stream';
        const oreader = new OldReader();
        const myReader = new Readable().wrap(oreader);
        
        myReader.on('readable', () => {
          myReader.read(); // etc.
        });
        
        @param stream

        An "old style" readable stream

      • chunk: any,
        callback?: (error: undefined | null | Error) => void
        ): boolean;

        The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

        The return value is true if, after admitting chunk, the internal buffer remains below the highWaterMark configured when the stream was created. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

        While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

        Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

        If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

        function write(data, cb) {
          if (!stream.write(data)) {
            stream.once('drain', cb);
          } else {
            process.nextTick(cb);
          }
        }
        
        // Wait for cb to be called before doing any other write.
        write('hello', () => {
          console.log('Write completed, do more writes now.');
        });
        

        A Writable stream in object mode will always ignore the encoding argument.

        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        @param callback

        Callback for when this chunk of data is flushed.

        @returns

        false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.

        chunk: any,
        encoding: BufferEncoding,
        callback?: (error: undefined | null | Error) => void
        ): boolean;

        The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

        The return value is true if, after admitting chunk, the internal buffer remains below the highWaterMark configured when the stream was created. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

        While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

        Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

        If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

        function write(data, cb) {
          if (!stream.write(data)) {
            stream.once('drain', cb);
          } else {
            process.nextTick(cb);
          }
        }
        
        // Wait for cb to be called before doing any other write.
        write('hello', () => {
          console.log('Write completed, do more writes now.');
        });
        

        A Writable stream in object mode will always ignore the encoding argument.

        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        @param encoding

        The encoding, if chunk is a string.

        @param callback

        Callback for when this chunk of data is flushed.

        @returns

        false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.

    • interface CipherChaCha20Poly1305Options

    • interface CipherGCM

      Instances of the Cipher class are used to encrypt data. The class can be used in one of two ways:

      • As a stream that is both readable and writable, where plain unencrypted data is written to produce encrypted data on the readable side, or
      • Using the cipher.update() and cipher.final() methods to produce the encrypted data.

      The createCipheriv method is used to create Cipher instances. Cipher objects are not to be created directly using the new keyword.

      Example: Using Cipher objects as streams:

      const {
        scrypt,
        randomFill,
        createCipheriv,
      } = await import('node:crypto');
      
      const algorithm = 'aes-192-cbc';
      const password = 'Password used to generate key';
      
      // First, we'll generate the key. The key length is dependent on the algorithm.
      // In this case for aes192, it is 24 bytes (192 bits).
      scrypt(password, 'salt', 24, (err, key) => {
        if (err) throw err;
        // Then, we'll generate a random initialization vector
        randomFill(new Uint8Array(16), (err, iv) => {
          if (err) throw err;
      
          // Once we have the key and iv, we can create and use the cipher...
          const cipher = createCipheriv(algorithm, key, iv);
      
          let encrypted = '';
          cipher.setEncoding('hex');
      
          cipher.on('data', (chunk) => encrypted += chunk);
          cipher.on('end', () => console.log(encrypted));
      
          cipher.write('some clear text data');
          cipher.end();
        });
      });
      

      Example: Using Cipher and piped streams:

      import {
        createReadStream,
        createWriteStream,
      } from 'node:fs';
      
      import {
        pipeline,
      } from 'node:stream';
      
      const {
        scrypt,
        randomFill,
        createCipheriv,
      } = await import('node:crypto');
      
      const algorithm = 'aes-192-cbc';
      const password = 'Password used to generate key';
      
      // First, we'll generate the key. The key length is dependent on the algorithm.
      // In this case for aes192, it is 24 bytes (192 bits).
      scrypt(password, 'salt', 24, (err, key) => {
        if (err) throw err;
        // Then, we'll generate a random initialization vector
        randomFill(new Uint8Array(16), (err, iv) => {
          if (err) throw err;
      
          const cipher = createCipheriv(algorithm, key, iv);
      
          const input = createReadStream('test.js');
          const output = createWriteStream('test.enc');
      
          pipeline(input, cipher, output, (err) => {
            if (err) throw err;
          });
        });
      });
      

      Example: Using the cipher.update() and cipher.final() methods:

      const {
        scrypt,
        randomFill,
        createCipheriv,
      } = await import('node:crypto');
      
      const algorithm = 'aes-192-cbc';
      const password = 'Password used to generate key';
      
      // First, we'll generate the key. The key length is dependent on the algorithm.
      // In this case for aes192, it is 24 bytes (192 bits).
      scrypt(password, 'salt', 24, (err, key) => {
        if (err) throw err;
        // Then, we'll generate a random initialization vector
        randomFill(new Uint8Array(16), (err, iv) => {
          if (err) throw err;
      
          const cipher = createCipheriv(algorithm, key, iv);
      
          let encrypted = cipher.update('some clear text data', 'utf8', 'hex');
          encrypted += cipher.final('hex');
          console.log(encrypted);
        });
      });
      
      • allowHalfOpen: boolean

        If false then the stream will automatically end the writable side when the readable side ends. Set initially by the allowHalfOpen constructor option, which defaults to true.

        This can be changed manually to change the half-open behavior of an existing Duplex stream instance, but must be changed before the 'end' event is emitted.

      • readonly closed: boolean

        Is true after 'close' has been emitted.

      • destroyed: boolean

        Is true after readable.destroy() has been called.

      • readonly errored: null | Error

        Returns error if the stream has been destroyed with an error.

      • readable: boolean

        Is true if it is safe to call read, which means the stream has not been destroyed or emitted 'error' or 'end'.

      • readonly readableAborted: boolean

        Returns whether the stream was destroyed or errored before emitting 'end'.

      • readonly readableDidRead: boolean

        Returns whether 'data' has been emitted.

      • readonly readableEncoding: null | BufferEncoding

        Getter for the property encoding of a given Readable stream. The encoding property can be set using the setEncoding method.

      • readonly readableEnded: boolean

        Becomes true when 'end' event is emitted.

      • readonly readableFlowing: null | boolean

        This property reflects the current state of a Readable stream as described in the Three states section.

      • readonly readableHighWaterMark: number

        Returns the value of highWaterMark passed when creating this Readable.

      • readonly readableLength: number

        This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the highWaterMark.

      • readonly readableObjectMode: boolean

        Getter for the property objectMode of a given Readable stream.

      • readonly writable: boolean

        Is true if it is safe to call writable.write(), which means the stream has not been destroyed, errored, or ended.

      • readonly writableCorked: number

        Number of times writable.uncork() needs to be called in order to fully uncork the stream.

      • readonly writableEnded: boolean

        Is true after writable.end() has been called. This property does not indicate whether the data has been flushed, for this use writable.writableFinished instead.

      • readonly writableFinished: boolean

        Is set to true immediately before the 'finish' event is emitted.

      • readonly writableHighWaterMark: number

        Return the value of highWaterMark passed when creating this Writable.

      • readonly writableLength: number

        This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the highWaterMark.

      • readonly writableNeedDrain: boolean

        Is true if the stream's buffer has been full and the stream will emit 'drain'.

      • readonly writableObjectMode: boolean

        Getter for the property objectMode of a given Writable stream.

      • callback: (error?: null | Error) => void
        ): void;
      • error: null | Error,
        callback: (error?: null | Error) => void
        ): void;
      • callback: (error?: null | Error) => void
        ): void;
      • ): void;
      • size: number
        ): void;
      • chunk: any,
        encoding: BufferEncoding,
        callback: TransformCallback
        ): void;
      • chunk: any,
        encoding: BufferEncoding,
        callback: (error?: null | Error) => void
        ): void;
      • chunks: { chunk: any; encoding: BufferEncoding }[],
        callback: (error?: null | Error) => void
        ): void;
      • [Symbol.asyncDispose](): Promise<void>;

        Calls readable.destroy() with an AbortError and returns a promise that fulfills when the stream is finished.

      • [Symbol.asyncIterator](): AsyncIterator<any>;
      • error: Error,
        event: string | symbol,
        ...args: AnyRest
        ): void;
      • event: 'close',
        listener: () => void
        ): this;

        Event emitter. The defined events on this stream include:

        1. close
        2. data
        3. drain
        4. end
        5. error
        6. finish
        7. pause
        8. pipe
        9. readable
        10. resume
        11. unpipe
        event: 'data',
        listener: (chunk: any) => void
        ): this;
        event: 'drain',
        listener: () => void
        ): this;
        event: 'end',
        listener: () => void
        ): this;
        event: 'error',
        listener: (err: Error) => void
        ): this;
        event: 'finish',
        listener: () => void
        ): this;
        event: 'pause',
        listener: () => void
        ): this;
        event: 'pipe',
        listener: (src: Readable) => void
        ): this;
        event: 'readable',
        listener: () => void
        ): this;
        event: 'resume',
        listener: () => void
        ): this;
        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;
        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • options?: Pick<ArrayOptions, 'signal'>

        This method returns a new stream with chunks of the underlying stream paired with a counter in the form [index, chunk]. The first index value is 0 and it increases by 1 for each chunk produced.

        @returns

        a stream of indexed pairs.
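
        A minimal sketch with illustrative chunks:

        import { Readable } from 'node:stream';

        const pairs = await Readable.from(['a', 'b', 'c'])
          .asIndexedPairs()
          .toArray();
        console.log(pairs); // [ [ 0, 'a' ], [ 1, 'b' ], [ 2, 'c' ] ]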

      • compose<T extends ReadableStream>(
        stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,
        options?: { signal: AbortSignal }
        ): T;
      • cork(): void;

        The writable.cork() method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.

        The primary intent of writable.cork() is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them all to writable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork() without implementing writable._writev() may have an adverse effect on throughput.

        See also: writable.uncork(), writable._writev().

      • error?: Error
        ): this;

        Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the readable stream will release any internal resources and subsequent calls to push() will be ignored.

        Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.

        Implementors should not override this method, but instead implement readable._destroy().

        @param error

        Error which will be passed as payload in 'error' event

      • limit: number,
        options?: Pick<ArrayOptions, 'signal'>

        This method returns a new stream with the first limit chunks dropped from the start.

        @param limit

        the number of chunks to drop from the readable.

        @returns

        a stream with limit chunks dropped from the start.
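
        For instance:

        import { Readable } from 'node:stream';

        // Skip the first two chunks and collect the rest.
        const rest = await Readable.from([1, 2, 3, 4])
          .drop(2)
          .toArray();
        console.log(rest); // [ 3, 4 ]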

      • event: 'close'
        ): boolean;

        Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

        Returns true if the event had listeners, false otherwise.

        import { EventEmitter } from 'node:events';
        const myEmitter = new EventEmitter();
        
        // First listener
        myEmitter.on('event', function firstListener() {
          console.log('Helloooo! first listener');
        });
        // Second listener
        myEmitter.on('event', function secondListener(arg1, arg2) {
          console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
        });
        // Third listener
        myEmitter.on('event', function thirdListener(...args) {
          const parameters = args.join(', ');
          console.log(`event with parameters ${parameters} in third listener`);
        });
        
        console.log(myEmitter.listeners('event'));
        
        myEmitter.emit('event', 1, 2, 3, 4, 5);
        
        // Prints:
        // [
        //   [Function: firstListener],
        //   [Function: secondListener],
        //   [Function: thirdListener]
        // ]
        // Helloooo! first listener
        // event with parameters 1, 2 in second listener
        // event with parameters 1, 2, 3, 4, 5 in third listener
        
        event: 'data',
        chunk: any
        ): boolean;
        event: 'drain'
        ): boolean;
        event: 'end'
        ): boolean;
        event: 'error',
        err: Error
        ): boolean;
        event: 'finish'
        ): boolean;
        event: 'pause'
        ): boolean;
        event: 'pipe',
        src: Readable
        ): boolean;
        event: 'readable'
        ): boolean;
        event: 'resume'
        ): boolean;
        event: 'unpipe',
        src: Readable
        ): boolean;
        event: string | symbol,
        ...args: any[]
        ): boolean;
      • cb?: () => void
        ): this;

        Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

        Calling the write method after calling end will raise an error.

        // Write 'hello, ' and then end with 'world!'.
        import fs from 'node:fs';
        const file = fs.createWriteStream('example.txt');
        file.write('hello, ');
        file.end('world!');
        // Writing more now is not allowed!
        
        chunk: any,
        cb?: () => void
        ): this;

        Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

        Calling the write method after calling end will raise an error.

        // Write 'hello, ' and then end with 'world!'.
        import fs from 'node:fs';
        const file = fs.createWriteStream('example.txt');
        file.write('hello, ');
        file.end('world!');
        // Writing more now is not allowed!
        
        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        chunk: any,
        encoding: BufferEncoding,
        cb?: () => void
        ): this;

        Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

        Calling the write method after calling end will raise an error.

        // Write 'hello, ' and then end with 'world!'.
        import fs from 'node:fs';
        const file = fs.createWriteStream('example.txt');
        file.write('hello, ');
        file.end('world!');
        // Writing more now is not allowed!
        
        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        @param encoding

        The encoding if chunk is a string

      • eventNames(): (string | symbol)[];

        Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or Symbols.

        import { EventEmitter } from 'node:events';
        
        const myEE = new EventEmitter();
        myEE.on('foo', () => {});
        myEE.on('bar', () => {});
        
        const sym = Symbol('symbol');
        myEE.on(sym, () => {});
        
        console.log(myEE.eventNames());
        // Prints: [ 'foo', 'bar', Symbol(symbol) ]
        
      • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
        options?: ArrayOptions
        ): Promise<boolean>;

        This method is similar to Array.prototype.every and calls fn on each chunk in the stream to check whether all awaited return values are truthy. Once an fn call's awaited return value for a chunk is falsy, the stream is destroyed and the promise is fulfilled with false. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to true if fn returned a truthy value for every one of the chunks.
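
        A small illustrative example:

        import { Readable } from 'node:stream';

        // Fulfilled with true only if the predicate holds for every chunk.
        const allPositive = await Readable.from([1, 2, 3])
          .every((data) => data > 0);
        console.log(allPositive); // true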

      • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
        options?: ArrayOptions

        This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be awaited.

        @param fn

        a function to filter chunks from the stream. Async or not.

        @returns

        a stream filtered with the predicate fn.
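
        A minimal sketch, collecting the filtered stream with toArray() for illustration:

        import { Readable } from 'node:stream';

        const evens = await Readable.from([1, 2, 3, 4])
          .filter((data) => data % 2 === 0)
          .toArray();
        console.log(evens); // [ 2, 4 ]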

      • Once the cipher.final() method has been called, the Cipher object can no longer be used to encrypt data. Attempts to call cipher.final() more than once will result in an error being thrown.

        @returns

        Any remaining enciphered contents. If outputEncoding is specified, a string is returned. If an outputEncoding is not provided, a Buffer is returned.

        outputEncoding: BufferEncoding
        ): string;

        Once the cipher.final() method has been called, the Cipher object can no longer be used to encrypt data. Attempts to call cipher.final() more than once will result in an error being thrown.

        @param outputEncoding

        The encoding of the return value.

        @returns

        Any remaining enciphered contents. If outputEncoding is specified, a string is returned. If an outputEncoding is not provided, a Buffer is returned.

      • find<T>(
        fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => data is T,
        options?: ArrayOptions
        ): Promise<undefined | T>;

        This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.

        fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
        options?: ArrayOptions
        ): Promise<any>;

        This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.
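
        For example:

        import { Readable } from 'node:stream';

        // Fulfilled with the first matching chunk, or undefined if none matches.
        const firstBig = await Readable.from([1, 2, 3, 4])
          .find((data) => data > 2);
        console.log(firstBig); // 3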

      • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => any,
        options?: ArrayOptions

        This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.

        It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.

        @param fn

        a function to map over every chunk in the stream. May be async. May be a stream or generator.

        @returns

        a stream flat-mapped with the function fn.
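
        A minimal sketch in which fn returns an iterable (an array of words) that is flattened into the result:

        import { Readable } from 'node:stream';

        const words = await Readable.from(['hello world', 'foo bar'])
          .flatMap((data) => data.split(' '))
          .toArray();
        console.log(words); // [ 'hello', 'world', 'foo', 'bar' ]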

      • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => void | Promise<void>,
        options?: ArrayOptions
        ): Promise<void>;

        This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be awaited.

        This method is different from for await...of loops in that it can optionally process chunks concurrently. In addition, a forEach iteration can only be stopped by having passed a signal option and aborting the related AbortController while for await...of can be stopped with break or return. In either case the stream will be destroyed.

        This method is different from listening to the 'data' event in that it uses the readable event in the underlying machinery and can limit the number of concurrent fn calls.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise for when the stream has finished.
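
        For instance:

        import { Readable } from 'node:stream';

        // The returned promise settles once every chunk has been processed.
        // Passing { concurrency: 2 } as a second argument would allow two
        // fn calls in flight at once.
        await Readable.from([1, 2, 3])
          .forEach((data) => console.log(data));
        // Prints: 1, 2, 3 (each on its own line)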

      • getMaxListeners(): number;

        Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to EventEmitter.defaultMaxListeners.

      • isPaused(): boolean;

        The readable.isPaused() method returns the current operating state of the Readable. This is used primarily by the mechanism that underlies the readable.pipe() method. In most typical cases, there will be no reason to use this method directly.

        const readable = new stream.Readable();
        
        readable.isPaused(); // === false
        readable.pause();
        readable.isPaused(); // === true
        readable.resume();
        readable.isPaused(); // === false
        
      • options?: { destroyOnReturn: boolean }
        ): AsyncIterator<any>;

        The iterator created by this method gives users the option to cancel the destruction of the stream if the for await...of loop is exited by return, break, or throw. When destroyOnReturn is set to false, the stream is not destroyed on early exit, though it is still destroyed if it emits an error during iteration.
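
        A minimal sketch showing destroyOnReturn: false leaving the stream intact after an early break:

        import { Readable } from 'node:stream';

        const readable = Readable.from([1, 2, 3]);

        for await (const chunk of readable.iterator({ destroyOnReturn: false })) {
          console.log(chunk); // 1
          break;
        }
        console.log(readable.destroyed); // false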

      • eventName: string | symbol,
        listener?: Function
        ): number;

        Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.

        @param eventName

        The name of the event being listened for

        @param listener

        The event handler function

      • eventName: string | symbol
        ): Function[];

        Returns a copy of the array of listeners for the event named eventName.

        import util from 'node:util';
        server.on('connection', (stream) => {
          console.log('someone connected!');
        });
        console.log(util.inspect(server.listeners('connection')));
        // Prints: [ [Function] ]
        
      • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => any,
        options?: ArrayOptions

        This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be awaited before being passed to the result stream.

        @param fn

        a function to map over every chunk in the stream. Async or not.

        @returns

        a stream mapped with the function fn.
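
        For example:

        import { Readable } from 'node:stream';

        // fn runs for every chunk; async results would be awaited.
        const doubled = await Readable.from([1, 2, 3])
          .map((data) => data * 2)
          .toArray();
        console.log(doubled); // [ 2, 4, 6 ]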

      • off<K>(
        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;

        Alias for emitter.removeListener().

      • event: 'close',
        listener: () => void
        ): this;

        Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

        server.on('connection', (stream) => {
          console.log('someone connected!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

        import { EventEmitter } from 'node:events';
        const myEE = new EventEmitter();
        myEE.on('foo', () => console.log('a'));
        myEE.prependListener('foo', () => console.log('b'));
        myEE.emit('foo');
        // Prints:
        //   b
        //   a
        
        @param listener

        The callback function

        event: 'data',
        listener: (chunk: any) => void
        ): this;
        event: 'drain',
        listener: () => void
        ): this;
        event: 'end',
        listener: () => void
        ): this;
        event: 'error',
        listener: (err: Error) => void
        ): this;
        event: 'finish',
        listener: () => void
        ): this;
        event: 'pause',
        listener: () => void
        ): this;
        event: 'pipe',
        listener: (src: Readable) => void
        ): this;
        event: 'readable',
        listener: () => void
        ): this;
        event: 'resume',
        listener: () => void
        ): this;
        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;
        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • event: 'close',
        listener: () => void
        ): this;

        Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

        server.once('connection', (stream) => {
          console.log('Ah, we have our first user!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

        import { EventEmitter } from 'node:events';
        const myEE = new EventEmitter();
        myEE.once('foo', () => console.log('a'));
        myEE.prependOnceListener('foo', () => console.log('b'));
        myEE.emit('foo');
        // Prints:
        //   b
        //   a
        
        @param listener

        The callback function

        event: 'data',
        listener: (chunk: any) => void
        ): this;
        event: 'drain',
        listener: () => void
        ): this;
        event: 'end',
        listener: () => void
        ): this;
        event: 'error',
        listener: (err: Error) => void
        ): this;
        event: 'finish',
        listener: () => void
        ): this;
        event: 'pause',
        listener: () => void
        ): this;
        event: 'pipe',
        listener: (src: Readable) => void
        ): this;
        event: 'readable',
        listener: () => void
        ): this;
        event: 'resume',
        listener: () => void
        ): this;
        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;
        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • pause(): this;

        The readable.pause() method will cause a stream in flowing mode to stop emitting 'data' events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.

        const readable = getReadableStreamSomehow();
        readable.on('data', (chunk) => {
          console.log(`Received ${chunk.length} bytes of data.`);
          readable.pause();
          console.log('There will be no additional data for 1 second.');
          setTimeout(() => {
            console.log('Now data will start flowing again.');
            readable.resume();
          }, 1000);
        });
        

        The readable.pause() method has no effect if there is a 'readable' event listener.

      • pipe<T extends WritableStream>(
        destination: T,
        options?: { end: boolean }
        ): T;
      • event: 'close',
        listener: () => void
        ): this;

        Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

        server.prependListener('connection', (stream) => {
          console.log('someone connected!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        @param listener

        The callback function

        event: 'data',
        listener: (chunk: any) => void
        ): this;
        event: 'drain',
        listener: () => void
        ): this;
        event: 'end',
        listener: () => void
        ): this;
        event: 'error',
        listener: (err: Error) => void
        ): this;
        event: 'finish',
        listener: () => void
        ): this;
        event: 'pause',
        listener: () => void
        ): this;
        event: 'pipe',
        listener: (src: Readable) => void
        ): this;
        event: 'readable',
        listener: () => void
        ): this;
        event: 'resume',
        listener: () => void
        ): this;
        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;
        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • event: 'close',
        listener: () => void
        ): this;

        Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

        server.prependOnceListener('connection', (stream) => {
          console.log('Ah, we have our first user!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        @param listener

        The callback function

        event: 'data',
        listener: (chunk: any) => void
        ): this;
        event: 'drain',
        listener: () => void
        ): this;
        event: 'end',
        listener: () => void
        ): this;
        event: 'error',
        listener: (err: Error) => void
        ): this;
        event: 'finish',
        listener: () => void
        ): this;
        event: 'pause',
        listener: () => void
        ): this;
        event: 'pipe',
        listener: (src: Readable) => void
        ): this;
        event: 'readable',
        listener: () => void
        ): this;
        event: 'resume',
        listener: () => void
        ): this;
        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;
        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • chunk: any,
        encoding?: BufferEncoding
        ): boolean;
      • eventName: string | symbol
        ): Function[];

        Returns a copy of the array of listeners for the event named eventName, including any wrappers (such as those created by .once()).

        import { EventEmitter } from 'node:events';
        const emitter = new EventEmitter();
        emitter.once('log', () => console.log('log once'));
        
        // Returns a new Array with a function `onceWrapper` which has a property
        // `listener` which contains the original listener bound above
        const listeners = emitter.rawListeners('log');
        const logFnWrapper = listeners[0];
        
        // Logs "log once" to the console and does not unbind the `once` event
        logFnWrapper.listener();
        
        // Logs "log once" to the console and removes the listener
        logFnWrapper();
        
        emitter.on('log', () => console.log('log persistently'));
        // Will return a new Array with a single function bound by `.on()` above
        const newListeners = emitter.rawListeners('log');
        
        // Logs "log persistently" twice
        newListeners[0]();
        emitter.emit('log');
        
      • size?: number
        ): any;

        The readable.read() method reads data out of the internal buffer and returns it. If no data is available to be read, null is returned. By default, the data is returned as a Buffer object unless an encoding has been specified using the readable.setEncoding() method or the stream is operating in object mode.

        The optional size argument specifies a specific number of bytes to read. If size bytes are not available to be read, null will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.

        If the size argument is not specified, all of the data contained in the internal buffer will be returned.

        The size argument must be less than or equal to 1 GiB.

        The readable.read() method should only be called on Readable streams operating in paused mode. In flowing mode, readable.read() is called automatically until the internal buffer is fully drained.

        const readable = getReadableStreamSomehow();
        
        // 'readable' may be triggered multiple times as data is buffered in
        readable.on('readable', () => {
          let chunk;
          console.log('Stream is readable (new data received in buffer)');
          // Use a loop to make sure we read all currently available data
          while (null !== (chunk = readable.read())) {
            console.log(`Read ${chunk.length} bytes of data...`);
          }
        });
        
        // 'end' will be triggered once when there is no more data available
        readable.on('end', () => {
          console.log('Reached end of stream.');
        });
        

        Each call to readable.read() returns a chunk of data, or null. The chunks are not concatenated. A while loop is necessary to consume all data currently in the buffer. When reading a large file, .read() may return null, having consumed all buffered content so far, while more data that has not yet been buffered is still to come. In this case a new 'readable' event will be emitted when there is more data in the buffer. Finally, the 'end' event will be emitted when there is no more data to come.

        Therefore to read a file's whole contents from a readable, it is necessary to collect chunks across multiple 'readable' events:

        const chunks = [];
        
        readable.on('readable', () => {
          let chunk;
          while (null !== (chunk = readable.read())) {
            chunks.push(chunk);
          }
        });
        
        readable.on('end', () => {
          const content = chunks.join('');
        });
        

        A Readable stream in object mode will always return a single item from a call to readable.read(size), regardless of the value of the size argument.

        If the readable.read() method returns a chunk of data, a 'data' event will also be emitted.

        Calling read after the 'end' event has been emitted will return null. No runtime error will be raised.

        @param size

        Optional argument to specify how much data to read.

      • reduce<T = any>(
        fn: (previous: any, data: any, options?: Pick<ArrayOptions, 'signal'>) => T,
        initial?: undefined,
        options?: Pick<ArrayOptions, 'signal'>
        ): Promise<T>;

        This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.

        If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.

        The reducer function iterates the stream element by element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function into a call to the readable.map method.

        @param fn

        a reducer function to call over every chunk in the stream. Async or not.

        @param initial

        the initial value to use in the reduction.

        @returns

        a promise for the final value of the reduction.

        reduce<T = any>(
        fn: (previous: T, data: any, options?: Pick<ArrayOptions, 'signal'>) => T,
        initial: T,
        options?: Pick<ArrayOptions, 'signal'>
        ): Promise<T>;

        This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.

        If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.

        The reducer function iterates the stream element by element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function into a call to the readable.map method.

        @param fn

        a reducer function to call over every chunk in the stream. Async or not.

        @param initial

        the initial value to use in the reduction.

        @returns

        a promise for the final value of the reduction.
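
        As a minimal sketch (using Readable.from to build an in-memory stream), reducing a stream of buffers to a total byte count:

        import { Readable } from 'node:stream';
        
        const total = await Readable.from([Buffer.from('hello'), Buffer.from('world')])
          .reduce((bytes, chunk) => bytes + chunk.length, 0);
        console.log(total); // 10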

      • eventName?: string | symbol
        ): this;

        Removes all listeners, or those of the specified eventName.

        It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).

        Returns a reference to the EventEmitter, so that calls can be chained.

      • event: 'close',
        listener: () => void
        ): this;

        Removes the specified listener from the listener array for the event named eventName.

        const callback = (stream) => {
          console.log('someone connected!');
        };
        server.on('connection', callback);
        // ...
        server.removeListener('connection', callback);
        

        removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance.

        Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from an emit() in progress. Subsequent events behave as expected.

        import { EventEmitter } from 'node:events';
        class MyEmitter extends EventEmitter {}
        const myEmitter = new MyEmitter();
        
        const callbackA = () => {
          console.log('A');
          myEmitter.removeListener('event', callbackB);
        };
        
        const callbackB = () => {
          console.log('B');
        };
        
        myEmitter.on('event', callbackA);
        
        myEmitter.on('event', callbackB);
        
        // callbackA removes listener callbackB but it will still be called.
        // Internal listener array at time of emit [callbackA, callbackB]
        myEmitter.emit('event');
        // Prints:
        //   A
        //   B
        
        // callbackB is now removed.
        // Internal listener array [callbackA]
        myEmitter.emit('event');
        // Prints:
        //   A
        

        Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.

        When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed:

        import { EventEmitter } from 'node:events';
        const ee = new EventEmitter();
        
        function pong() {
          console.log('pong');
        }
        
        ee.on('ping', pong);
        ee.once('ping', pong);
        ee.removeListener('ping', pong);
        
        ee.emit('ping');
        ee.emit('ping');
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        event: 'data',
        listener: (chunk: any) => void
        ): this;
        event: 'drain',
        listener: () => void
        ): this;
        event: 'end',
        listener: () => void
        ): this;
        event: 'error',
        listener: (err: Error) => void
        ): this;
        event: 'finish',
        listener: () => void
        ): this;
        event: 'pause',
        listener: () => void
        ): this;
        event: 'pipe',
        listener: (src: Readable) => void
        ): this;
        event: 'readable',
        listener: () => void
        ): this;
        event: 'resume',
        listener: () => void
        ): this;
        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;
        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • resume(): this;

        The readable.resume() method causes an explicitly paused Readable stream to resume emitting 'data' events, switching the stream into flowing mode.

        The readable.resume() method can be used to fully consume the data from a stream without actually processing any of that data:

        getReadableStreamSomehow()
          .resume()
          .on('end', () => {
            console.log('Reached the end, but did not read anything.');
          });
        

        The readable.resume() method has no effect if there is a 'readable' event listener.

      • buffer: ArrayBufferView,
        options?: { plaintextLength: number }
        ): this;
      • autoPadding?: boolean
        ): this;

        When using block encryption algorithms, the Cipher class will automatically add padding to the input data to the appropriate block size. To disable the default padding call cipher.setAutoPadding(false).

        When autoPadding is false, the length of the entire input data must be a multiple of the cipher's block size or cipher.final() will throw an error. Disabling automatic padding is useful for non-standard padding, for instance using 0x0 instead of PKCS padding.

        The cipher.setAutoPadding() method must be called before cipher.final().

        @returns

        for method chaining.
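
        A sketch of disabling padding (the zeroed key and IV below are placeholders for illustration only); with autoPadding off, the input must align to the 16-byte AES block size:

        const { createCipheriv } = await import('node:crypto');
        
        const key = Buffer.alloc(24); // placeholder key; derive a real one with scrypt
        const iv = Buffer.alloc(16);  // placeholder IV; use a random IV in practice
        const cipher = createCipheriv('aes-192-cbc', key, iv);
        cipher.setAutoPadding(false);
        
        // Exactly 16 bytes of input, so cipher.final() will not throw.
        let encrypted = cipher.update('sixteen byte txt', 'utf8', 'hex');
        encrypted += cipher.final('hex');
        console.log(encrypted);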

      • encoding: BufferEncoding
        ): this;

        The writable.setDefaultEncoding() method sets the default encoding for a Writable stream.

        @param encoding

        The new default encoding
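
        For example (sketch, writing to an arbitrary file stream):

        import fs from 'node:fs';
        
        const writable = fs.createWriteStream('example.txt');
        writable.setDefaultEncoding('utf8');
        // String chunks are now decoded as UTF-8 unless an encoding is passed to write().
        writable.write('some data');
        writable.end();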

      • encoding: BufferEncoding
        ): this;

        The readable.setEncoding() method sets the character encoding for data read from the Readable stream.

        By default, no encoding is assigned and stream data will be returned as Buffer objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than as Buffer objects. For instance, calling readable.setEncoding('utf8') will cause the output data to be interpreted as UTF-8 data, and passed as strings. Calling readable.setEncoding('hex') will cause the data to be encoded in hexadecimal string format.

        The Readable stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream as Buffer objects.

        const readable = getReadableStreamSomehow();
        readable.setEncoding('utf8');
        readable.on('data', (chunk) => {
          assert.equal(typeof chunk, 'string');
          console.log('Got %d characters of string data:', chunk.length);
        });
        
        @param encoding

        The encoding to use.

      • n: number
        ): this;

        By default EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps find memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.

        Returns a reference to the EventEmitter, so that calls can be chained.
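
        For example (sketch):

        import { EventEmitter } from 'node:events';
        
        const emitter = new EventEmitter();
        emitter.setMaxListeners(20); // warn only beyond 20 listeners per event
        // emitter.setMaxListeners(0) or Infinity removes the limit entirely.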

      • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
        options?: ArrayOptions
        ): Promise<boolean>;

        This method is similar to Array.prototype.some and calls fn on each chunk in the stream until an awaited return value is true (or any truthy value). Once an fn call on a chunk has a truthy awaited return value, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to true if fn returned a truthy value for at least one of the chunks.
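
        A minimal sketch using an in-memory stream built with Readable.from:

        import { Readable } from 'node:stream';
        
        const anyBig = await Readable.from([1, 2, 3, 4]).some((n) => n > 3);
        console.log(anyBig); // true; the stream is destroyed once 4 is seen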

      • limit: number,
        options?: Pick<ArrayOptions, 'signal'>

        This method returns a new stream with the first limit chunks.

        @param limit

        the number of chunks to take from the readable.

        @returns

        a stream with limit chunks taken.
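
        A minimal sketch:

        import { Readable } from 'node:stream';
        
        const firstTwo = await Readable.from([1, 2, 3, 4]).take(2).toArray();
        console.log(firstTwo); // [ 1, 2 ]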

      • options?: Pick<ArrayOptions, 'signal'>
        ): Promise<any[]>;

        This method allows easily obtaining the contents of a stream.

        As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.

        @returns

        a promise containing an array with the contents of the stream.
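
        A minimal sketch (fine for small inputs, since every chunk is buffered in memory):

        import { Readable } from 'node:stream';
        
        const chunks = await Readable.from(['stream', 'to', 'array']).toArray();
        console.log(chunks); // [ 'stream', 'to', 'array' ]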

      • uncork(): void;

        The writable.uncork() method flushes all data buffered since cork was called.

        When using writable.cork() and writable.uncork() to manage the buffering of writes to a stream, defer calls to writable.uncork() using process.nextTick(). Doing so allows batching of all writable.write() calls that occur within a given Node.js event loop phase.

        stream.cork();
        stream.write('some ');
        stream.write('data ');
        process.nextTick(() => stream.uncork());
        

        If the writable.cork() method is called multiple times on a stream, the same number of calls to writable.uncork() must be called to flush the buffered data.

        stream.cork();
        stream.write('some ');
        stream.cork();
        stream.write('data ');
        process.nextTick(() => {
          stream.uncork();
          // The data will not be flushed until uncork() is called a second time.
          stream.uncork();
        });
        

        See also: writable.cork().

      • destination?: WritableStream
        ): this;

        The readable.unpipe() method detaches a Writable stream previously attached using the pipe method.

        If the destination is not specified, then all pipes are detached.

        If the destination is specified, but no pipe is set up for it, then the method does nothing.

        import fs from 'node:fs';
        const readable = getReadableStreamSomehow();
        const writable = fs.createWriteStream('file.txt');
        // All the data from readable goes into 'file.txt',
        // but only for the first second.
        readable.pipe(writable);
        setTimeout(() => {
          console.log('Stop writing to file.txt.');
          readable.unpipe(writable);
          console.log('Manually close the file stream.');
          writable.end();
        }, 1000);
        
        @param destination

        Optional specific stream to unpipe

      • chunk: any,
        encoding?: BufferEncoding
        ): void;

        Passing chunk as null signals the end of the stream (EOF) and behaves the same as readable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.

        The readable.unshift() method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.

        The stream.unshift(chunk) method cannot be called after the 'end' event has been emitted or a runtime error will be thrown.

        Developers using stream.unshift() often should consider switching to use of a Transform stream instead. See the API for stream implementers section for more information.

        // Pull off a header delimited by \n\n.
        // Use unshift() if we get too much.
        // Call the callback with (error, header, stream).
        import { StringDecoder } from 'node:string_decoder';
        function parseHeader(stream, callback) {
          stream.on('error', callback);
          stream.on('readable', onReadable);
          const decoder = new StringDecoder('utf8');
          let header = '';
          function onReadable() {
            let chunk;
            while (null !== (chunk = stream.read())) {
              const str = decoder.write(chunk);
              if (str.includes('\n\n')) {
                // Found the header boundary.
                const split = str.split(/\n\n/);
                header += split.shift();
                const remaining = split.join('\n\n');
                const buf = Buffer.from(remaining, 'utf8');
                stream.removeListener('error', callback);
                // Remove the 'readable' listener before unshifting.
                stream.removeListener('readable', onReadable);
                if (buf.length)
                  stream.unshift(buf);
                // Now the body of the message can be read from the stream.
                callback(null, header, stream);
                return;
              }
              // Still reading the header.
              header += str;
            }
          }
        }
        

        Unlike push, stream.unshift(chunk) will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results if readable.unshift() is called during a read (i.e. from within a _read implementation on a custom stream). Following the call to readable.unshift() with an immediate push will reset the reading state appropriately, however it is best to simply avoid calling readable.unshift() while in the process of performing a read.

        @param chunk

        Chunk of data to unshift onto the read queue. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray}, {DataView} or null. For object mode streams, chunk may be any JavaScript value.

        @param encoding

        Encoding of string chunks. Must be a valid Buffer encoding, such as 'utf8' or 'ascii'.

      • data: ArrayBufferView
        ): Buffer;

        Updates the cipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer, TypedArray, or DataView. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.

        The outputEncoding specifies the output format of the enciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.

        The cipher.update() method can be called multiple times with new data until cipher.final() is called. Calling cipher.update() after cipher.final() will result in an error being thrown.

        data: string,
        inputEncoding: Encoding
        ): Buffer;

        Updates the cipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer, TypedArray, or DataView. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.

        The outputEncoding specifies the output format of the enciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.

        The cipher.update() method can be called multiple times with new data until cipher.final() is called. Calling cipher.update() after cipher.final() will result in an error being thrown.

        @param inputEncoding

        The encoding of the data.

        data: ArrayBufferView,
        inputEncoding: undefined,
        outputEncoding: Encoding
        ): string;

        Updates the cipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer, TypedArray, or DataView. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.

        The outputEncoding specifies the output format of the enciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.

        The cipher.update() method can be called multiple times with new data until cipher.final() is called. Calling cipher.update() after cipher.final() will result in an error being thrown.

        @param inputEncoding

        The encoding of the data.

        @param outputEncoding

        The encoding of the return value.

        data: string,
        inputEncoding: undefined | Encoding,
        outputEncoding: Encoding
        ): string;

        Updates the cipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer, TypedArray, or DataView. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.

        The outputEncoding specifies the output format of the enciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.

        The cipher.update() method can be called multiple times with new data until cipher.final() is called. Calling cipher.update() after cipher.final() will result in an error being thrown.

        @param inputEncoding

        The encoding of the data.

        @param outputEncoding

        The encoding of the return value.
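
        A sketch exercising both overload shapes (the zeroed key and IV are placeholders for illustration only):

        const { createCipheriv } = await import('node:crypto');
        
        const cipher = createCipheriv('aes-192-cbc', Buffer.alloc(24), Buffer.alloc(16));
        
        // String input with an inputEncoding, hex string output.
        let encrypted = cipher.update('some clear text data', 'utf8', 'hex');
        // Buffer input: inputEncoding is ignored, so pass undefined.
        encrypted += cipher.update(Buffer.from(' and more'), undefined, 'hex');
        encrypted += cipher.final('hex');
        console.log(encrypted);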

      • stream: ReadableStream
        ): this;

        Prior to Node.js 0.10, streams did not implement the entire node:stream module API as it is currently defined. (See Compatibility for more information.)

        When using an older Node.js library that emits 'data' events and has a pause method that is advisory only, the readable.wrap() method can be used to create a Readable stream that uses the old stream as its data source.

        It will rarely be necessary to use readable.wrap() but the method has been provided as a convenience for interacting with older Node.js applications and libraries.

        import { OldReader } from './old-api-module.js';
        import { Readable } from 'node:stream';
        const oreader = new OldReader();
        const myReader = new Readable().wrap(oreader);
        
        myReader.on('readable', () => {
          myReader.read(); // etc.
        });
        
        @param stream

        An "old style" readable stream

      • chunk: any,
        callback?: (error: undefined | null | Error) => void
        ): boolean;

        The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

        The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

        While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

        Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

        If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

        function write(data, cb) {
          if (!stream.write(data)) {
            stream.once('drain', cb);
          } else {
            process.nextTick(cb);
          }
        }
        
        // Wait for cb to be called before doing any other write.
        write('hello', () => {
          console.log('Write completed, do more writes now.');
        });
        

        A Writable stream in object mode will always ignore the encoding argument.

        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        @param callback

        Callback for when this chunk of data is flushed.

        @returns

        false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.

        chunk: any,
        encoding: BufferEncoding,
        callback?: (error: undefined | null | Error) => void
        ): boolean;

        The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

        The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

        While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

        Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

        If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

        function write(data, cb) {
          if (!stream.write(data)) {
            stream.once('drain', cb);
          } else {
            process.nextTick(cb);
          }
        }
        
        // Wait for cb to be called before doing any other write.
        write('hello', () => {
          console.log('Write completed, do more writes now.');
        });
        

        A Writable stream in object mode will always ignore the encoding argument.

        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        @param encoding

        The encoding, if chunk is a string.

        @param callback

        Callback for when this chunk of data is flushed.

        @returns

        false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.

    • interface CipherGCMOptions

    • interface CipherInfo

      • blockSize?: number

        The block size of the cipher in bytes. This property is omitted when mode is 'stream'.

      • ivLength?: number

        The expected or default initialization vector length in bytes. This property is omitted if the cipher does not use an initialization vector.

      • keyLength: number

        The expected or default key length in bytes.

      • mode: CipherMode

        The cipher mode.

      • name: string

        The name of the cipher.

      • nid: number

        The nid of the cipher.
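
        These are the fields returned by crypto.getCipherInfo(); a brief sketch:

        const { getCipherInfo } = await import('node:crypto');
        
        console.log(getCipherInfo('aes-192-cbc'));
        // e.g. { mode: 'cbc', name: 'aes-192-cbc', nid: 423,
        //        blockSize: 16, ivLength: 16, keyLength: 24 }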

    • interface CipherInfoOptions

    • interface CipherOCB

      Instances of the Cipher class are used to encrypt data. The class can be used in one of two ways:

      • As a stream that is both readable and writable, where plain unencrypted data is written to produce encrypted data on the readable side, or
      • Using the cipher.update() and cipher.final() methods to produce the encrypted data.

      The createCipheriv method is used to create Cipher instances. Cipher objects are not to be created directly using the new keyword.

      Example: Using Cipher objects as streams:

      const {
        scrypt,
        randomFill,
        createCipheriv,
      } = await import('node:crypto');
      
      const algorithm = 'aes-192-cbc';
      const password = 'Password used to generate key';
      
      // First, we'll generate the key. The key length is dependent on the algorithm.
      // In this case for aes192, it is 24 bytes (192 bits).
      scrypt(password, 'salt', 24, (err, key) => {
        if (err) throw err;
        // Then, we'll generate a random initialization vector
        randomFill(new Uint8Array(16), (err, iv) => {
          if (err) throw err;
      
          // Once we have the key and iv, we can create and use the cipher...
          const cipher = createCipheriv(algorithm, key, iv);
      
          let encrypted = '';
          cipher.setEncoding('hex');
      
          cipher.on('data', (chunk) => encrypted += chunk);
          cipher.on('end', () => console.log(encrypted));
      
          cipher.write('some clear text data');
          cipher.end();
        });
      });
      

      Example: Using Cipher and piped streams:

      import {
        createReadStream,
        createWriteStream,
      } from 'node:fs';
      
      import {
        pipeline,
      } from 'node:stream';
      
      const {
        scrypt,
        randomFill,
        createCipheriv,
      } = await import('node:crypto');
      
      const algorithm = 'aes-192-cbc';
      const password = 'Password used to generate key';
      
      // First, we'll generate the key. The key length is dependent on the algorithm.
      // In this case for aes192, it is 24 bytes (192 bits).
      scrypt(password, 'salt', 24, (err, key) => {
        if (err) throw err;
        // Then, we'll generate a random initialization vector
        randomFill(new Uint8Array(16), (err, iv) => {
          if (err) throw err;
      
          const cipher = createCipheriv(algorithm, key, iv);
      
          const input = createReadStream('test.js');
          const output = createWriteStream('test.enc');
      
          pipeline(input, cipher, output, (err) => {
            if (err) throw err;
          });
        });
      });
      

      Example: Using the cipher.update() and cipher.final() methods:

      const {
        scrypt,
        randomFill,
        createCipheriv,
      } = await import('node:crypto');
      
      const algorithm = 'aes-192-cbc';
      const password = 'Password used to generate key';
      
      // First, we'll generate the key. The key length is dependent on the algorithm.
      // In this case for aes192, it is 24 bytes (192 bits).
      scrypt(password, 'salt', 24, (err, key) => {
        if (err) throw err;
        // Then, we'll generate a random initialization vector
        randomFill(new Uint8Array(16), (err, iv) => {
          if (err) throw err;
      
          const cipher = createCipheriv(algorithm, key, iv);
      
          let encrypted = cipher.update('some clear text data', 'utf8', 'hex');
          encrypted += cipher.final('hex');
          console.log(encrypted);
        });
      });
      
      • allowHalfOpen: boolean

        If false then the stream will automatically end the writable side when the readable side ends. Set initially by the allowHalfOpen constructor option, which defaults to true.

        This can be changed manually to change the half-open behavior of an existing Duplex stream instance, but must be changed before the 'end' event is emitted.

      • readonly closed: boolean

        Is true after 'close' has been emitted.

      • destroyed: boolean

        Is true after readable.destroy() has been called.

      • readonly errored: null | Error

        Returns error if the stream has been destroyed with an error.

      • readable: boolean

        Is true if it is safe to call read, which means the stream has not been destroyed or emitted 'error' or 'end'.

      • readonly readableAborted: boolean

        Returns whether the stream was destroyed or errored before emitting 'end'.

      • readonly readableDidRead: boolean

        Returns whether 'data' has been emitted.

      • readonly readableEncoding: null | BufferEncoding

        Getter for the property encoding of a given Readable stream. The encoding property can be set using the setEncoding method.

      • readonly readableEnded: boolean

        Becomes true when 'end' event is emitted.

      • readonly readableFlowing: null | boolean

        This property reflects the current state of a Readable stream as described in the Three states section.

      • readonly readableHighWaterMark: number

        Returns the value of highWaterMark passed when creating this Readable.

      • readonly readableLength: number

        This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the highWaterMark.

      • readonly readableObjectMode: boolean

        Getter for the property objectMode of a given Readable stream.

      • readonly writable: boolean

        Is true if it is safe to call writable.write(), which means the stream has not been destroyed, errored, or ended.

      • readonly writableCorked: number

        Number of times writable.uncork() needs to be called in order to fully uncork the stream.

      • readonly writableEnded: boolean

        Is true after writable.end() has been called. This property does not indicate whether the data has been flushed; for that, use writable.writableFinished instead.

      • readonly writableFinished: boolean

        Is set to true immediately before the 'finish' event is emitted.

      • readonly writableHighWaterMark: number

        Return the value of highWaterMark passed when creating this Writable.

      • readonly writableLength: number

        This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the highWaterMark.

      • readonly writableNeedDrain: boolean

        Is true if the stream's buffer has been full and the stream will emit 'drain'.

      • readonly writableObjectMode: boolean

        Getter for the property objectMode of a given Writable stream.

      • callback: (error?: null | Error) => void
        ): void;
      • error: null | Error,
        callback: (error?: null | Error) => void
        ): void;
      • callback: (error?: null | Error) => void
        ): void;
      • ): void;
      • size: number
        ): void;
      • chunk: any,
        encoding: BufferEncoding,
        ): void;
      • chunk: any,
        encoding: BufferEncoding,
        callback: (error?: null | Error) => void
        ): void;
      • chunks: { chunk: any; encoding: BufferEncoding }[],
        callback: (error?: null | Error) => void
        ): void;
      • [Symbol.asyncDispose](): Promise<void>;

        Calls readable.destroy() with an AbortError and returns a promise that fulfills when the stream is finished.

      • [Symbol.asyncIterator](): AsyncIterator<any>;
      • error: Error,
        event: string | symbol,
        ...args: AnyRest
        ): void;
      • event: 'close',
        listener: () => void
        ): this;

        Event emitter. The defined events include:

        1. close
        2. data
        3. drain
        4. end
        5. error
        6. finish
        7. pause
        8. pipe
        9. readable
        10. resume
        11. unpipe
        event: 'data',
        listener: (chunk: any) => void
        ): this;

        Event emitter. The defined events include:

        1. close
        2. data
        3. drain
        4. end
        5. error
        6. finish
        7. pause
        8. pipe
        9. readable
        10. resume
        11. unpipe
        event: 'drain',
        listener: () => void
        ): this;

        Event emitter. The defined events include:

        1. close
        2. data
        3. drain
        4. end
        5. error
        6. finish
        7. pause
        8. pipe
        9. readable
        10. resume
        11. unpipe
        event: 'end',
        listener: () => void
        ): this;

        Event emitter. The defined events include:

        1. close
        2. data
        3. drain
        4. end
        5. error
        6. finish
        7. pause
        8. pipe
        9. readable
        10. resume
        11. unpipe
        event: 'error',
        listener: (err: Error) => void
        ): this;

        Event emitter. The defined events include:

        1. close
        2. data
        3. drain
        4. end
        5. error
        6. finish
        7. pause
        8. pipe
        9. readable
        10. resume
        11. unpipe
        event: 'finish',
        listener: () => void
        ): this;

        Event emitter. The defined events include:

        1. close
        2. data
        3. drain
        4. end
        5. error
        6. finish
        7. pause
        8. pipe
        9. readable
        10. resume
        11. unpipe
        event: 'pause',
        listener: () => void
        ): this;

        Event emitter. The defined events include:

        1. close
        2. data
        3. drain
        4. end
        5. error
        6. finish
        7. pause
        8. pipe
        9. readable
        10. resume
        11. unpipe
        event: 'pipe',
        listener: (src: Readable) => void
        ): this;

        Event emitter. The defined events include:

        1. close
        2. data
        3. drain
        4. end
        5. error
        6. finish
        7. pause
        8. pipe
        9. readable
        10. resume
        11. unpipe
        event: 'readable',
        listener: () => void
        ): this;

        Event emitter. The defined events include:

        1. close
        2. data
        3. drain
        4. end
        5. error
        6. finish
        7. pause
        8. pipe
        9. readable
        10. resume
        11. unpipe
        event: 'resume',
        listener: () => void
        ): this;

        Event emitter. The defined events include:

        1. close
        2. data
        3. drain
        4. end
        5. error
        6. finish
        7. pause
        8. pipe
        9. readable
        10. resume
        11. unpipe
        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;

        Event emitter. The defined events include:

        1. close
        2. data
        3. drain
        4. end
        5. error
        6. finish
        7. pause
        8. pipe
        9. readable
        10. resume
        11. unpipe
        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;

        Event emitter. The defined events include:

        1. close
        2. data
        3. drain
        4. end
        5. error
        6. finish
        7. pause
        8. pipe
        9. readable
        10. resume
        11. unpipe
      • options?: Pick<ArrayOptions, 'signal'>

        This method returns a new stream with chunks of the underlying stream paired with a counter in the form [index, chunk]. The first index value is 0 and it increases by 1 for each chunk produced.

        @returns

        a stream of indexed pairs.
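
        If this overload corresponds to readable.asIndexedPairs() (as the [index, chunk] description suggests), a minimal sketch:

        import { Readable } from 'node:stream';
        
        const pairs = await Readable.from(['a', 'b']).asIndexedPairs().toArray();
        console.log(pairs); // [ [ 0, 'a' ], [ 1, 'b' ] ]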

      • compose<T extends ReadableStream>(
        stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,
        options?: { signal: AbortSignal }
        ): T;
      • cork(): void;

        The writable.cork() method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.

        The primary intent of writable.cork() is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them all to writable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork() without implementing writable._writev() may have an adverse effect on throughput.

        See also: writable.uncork(), writable._writev().

      • error?: Error
        ): this;

        Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the readable stream will release any internal resources and subsequent calls to push() will be ignored.

        Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.

        Implementors should not override this method, but instead implement readable._destroy().

        @param error

        Error which will be passed as payload in 'error' event
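
        For example (sketch, reusing the getReadableStreamSomehow() placeholder from the other examples):

        const readable = getReadableStreamSomehow();
        readable.on('error', (err) => console.error('destroyed with:', err.message));
        readable.destroy(new Error('boom')); // emits 'error', then 'close'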

      • limit: number,
        options?: Pick<ArrayOptions, 'signal'>

        This method returns a new stream with the first limit chunks dropped from the start.

        @param limit

        the number of chunks to drop from the readable.

        @returns

        a stream with limit chunks dropped from the start.
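
        A minimal sketch:

        import { Readable } from 'node:stream';
        
        const rest = await Readable.from([1, 2, 3, 4]).drop(2).toArray();
        console.log(rest); // [ 3, 4 ]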

      • event: 'close'
        ): boolean;

        Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

        Returns true if the event had listeners, false otherwise.

        import { EventEmitter } from 'node:events';
        const myEmitter = new EventEmitter();
        
        // First listener
        myEmitter.on('event', function firstListener() {
          console.log('Helloooo! first listener');
        });
        // Second listener
        myEmitter.on('event', function secondListener(arg1, arg2) {
          console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
        });
        // Third listener
        myEmitter.on('event', function thirdListener(...args) {
          const parameters = args.join(', ');
          console.log(`event with parameters ${parameters} in third listener`);
        });
        
        console.log(myEmitter.listeners('event'));
        
        myEmitter.emit('event', 1, 2, 3, 4, 5);
        
        // Prints:
        // [
        //   [Function: firstListener],
        //   [Function: secondListener],
        //   [Function: thirdListener]
        // ]
        // Helloooo! first listener
        // event with parameters 1, 2 in second listener
        // event with parameters 1, 2, 3, 4, 5 in third listener
        
        event: 'data',
        chunk: any
        ): boolean;
        event: 'drain'
        ): boolean;
        event: 'end'
        ): boolean;
        event: 'error',
        err: Error
        ): boolean;
        event: 'finish'
        ): boolean;
        event: 'pause'
        ): boolean;
        event: 'pipe',
        src: Readable
        ): boolean;
        event: 'readable'
        ): boolean;
        event: 'resume'
        ): boolean;
        event: 'unpipe',
        src: Readable
        ): boolean;
        event: string | symbol,
        ...args: any[]
        ): boolean;
      • cb?: () => void
        ): this;

        Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

        Calling the write method after calling end will raise an error.

        // Write 'hello, ' and then end with 'world!'.
        import fs from 'node:fs';
        const file = fs.createWriteStream('example.txt');
        file.write('hello, ');
        file.end('world!');
        // Writing more now is not allowed!
        
        chunk: any,
        cb?: () => void
        ): this;

        Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

        Calling the write method after calling end will raise an error.

        // Write 'hello, ' and then end with 'world!'.
        import fs from 'node:fs';
        const file = fs.createWriteStream('example.txt');
        file.write('hello, ');
        file.end('world!');
        // Writing more now is not allowed!
        
        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        chunk: any,
        encoding: BufferEncoding,
        cb?: () => void
        ): this;

        Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

        Calling the write method after calling end will raise an error.

        // Write 'hello, ' and then end with 'world!'.
        import fs from 'node:fs';
        const file = fs.createWriteStream('example.txt');
        file.write('hello, ');
        file.end('world!');
        // Writing more now is not allowed!
        
        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        @param encoding

        The encoding if chunk is a string

      • eventNames(): (string | symbol)[];

        Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or Symbols.

        import { EventEmitter } from 'node:events';
        
        const myEE = new EventEmitter();
        myEE.on('foo', () => {});
        myEE.on('bar', () => {});
        
        const sym = Symbol('symbol');
        myEE.on(sym, () => {});
        
        console.log(myEE.eventNames());
        // Prints: [ 'foo', 'bar', Symbol(symbol) ]
        
      • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
        options?: ArrayOptions
        ): Promise<boolean>;

        This method is similar to Array.prototype.every and calls fn on each chunk in the stream to check if all awaited return values are truthy for fn. Once an fn call on a chunk has a falsy awaited return value, the stream is destroyed and the promise is fulfilled with false. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to true if fn returned a truthy value for every one of the chunks.
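
        A minimal sketch:

        import { Readable } from 'node:stream';
        
        const allPositive = await Readable.from([1, 2, 3]).every((n) => n > 0);
        console.log(allPositive); // true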

      • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
        options?: ArrayOptions

        This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be awaited.

        @param fn

        a function to filter chunks from the stream. Async or not.

        @returns

        a stream filtered with the predicate fn.
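
        A minimal sketch:

        import { Readable } from 'node:stream';
        
        const evens = await Readable.from([1, 2, 3, 4])
          .filter((n) => n % 2 === 0)
          .toArray();
        console.log(evens); // [ 2, 4 ]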

      • Once the cipher.final() method has been called, the Cipher object can no longer be used to encrypt data. Attempts to call cipher.final() more than once will result in an error being thrown.

        @returns

        Any remaining enciphered contents. If outputEncoding is specified, a string is returned. If an outputEncoding is not provided, a Buffer is returned.

        outputEncoding: BufferEncoding
        ): string;

        Once the cipher.final() method has been called, the Cipher object can no longer be used to encrypt data. Attempts to call cipher.final() more than once will result in an error being thrown.

        @param outputEncoding

        The encoding of the return value.

        @returns

        Any remaining enciphered contents. If outputEncoding is specified, a string is returned. If an outputEncoding is not provided, a Buffer is returned.

      • find<T>(
        fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => data is T,
        options?: ArrayOptions
        ): Promise<undefined | T>;

        This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.

        fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
        options?: ArrayOptions
        ): Promise<any>;

        This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.
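
        A minimal sketch:

        import { Readable } from 'node:stream';
        
        const firstBig = await Readable.from([1, 2, 3, 4]).find((n) => n > 2);
        console.log(firstBig); // 3; the remaining chunks are never read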

      • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => any,
        options?: ArrayOptions

        This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.

        It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.

        @param fn

        a function to map over every chunk in the stream. May be async. May be a stream or generator.

        @returns

        a stream flat-mapped with the function fn.
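
        A minimal sketch, flattening an iterable returned from fn:

        import { Readable } from 'node:stream';
        
        const letters = await Readable.from(['ab', 'cd'])
          .flatMap((s) => s.split(''))
          .toArray();
        console.log(letters); // [ 'a', 'b', 'c', 'd' ]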

      • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => void | Promise<void>,
        options?: ArrayOptions
        ): Promise<void>;

        This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be awaited.

        This method is different from for await...of loops in that it can optionally process chunks concurrently. In addition, a forEach iteration can only be stopped by having passed a signal option and aborting the related AbortController while for await...of can be stopped with break or return. In either case the stream will be destroyed.

        This method is different from listening to the 'data' event in that it uses the 'readable' event in the underlying machinery and can limit the number of concurrent fn calls.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise for when the stream has finished.
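
        A minimal sketch with a bounded number of concurrent fn calls:

        import { Readable } from 'node:stream';
        import { setTimeout } from 'node:timers/promises';
        
        await Readable.from([1, 2, 3, 4]).forEach(async (n) => {
          await setTimeout(10); // up to two chunks are processed at once
          console.log(n);
        }, { concurrency: 2 });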

      • getMaxListeners(): number;

        Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to EventEmitter.defaultMaxListeners.

      • isPaused(): boolean;

        The readable.isPaused() method returns the current operating state of the Readable. This is used primarily by the mechanism that underlies the readable.pipe() method. In most typical cases, there will be no reason to use this method directly.

        const readable = new stream.Readable();
        
        readable.isPaused(); // === false
        readable.pause();
        readable.isPaused(); // === true
        readable.resume();
        readable.isPaused(); // === false
        
      • options?: { destroyOnReturn: boolean }
        ): AsyncIterator<any>;

        The iterator created by this method gives users the option to cancel the destruction of the stream if the for await...of loop is exited by return, break, or throw, or to prevent the iterator from destroying the stream if the stream emitted an error during iteration.
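
        A minimal sketch of opting out of destruction on early exit:

        import { Readable } from 'node:stream';
        
        const readable = Readable.from(['a', 'b', 'c']);
        for await (const chunk of readable.iterator({ destroyOnReturn: false })) {
          console.log(chunk);
          break; // the stream stays alive and its remaining data can still be read
        }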

      • listenerCount(
        eventName: string | symbol,
        listener?: Function
        ): number;

        Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.

        @param eventName

        The name of the event being listened for

        @param listener

        The event handler function
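
        For example, a minimal sketch counting registrations of a single handler:

        import { EventEmitter } from 'node:events';
        const emitter = new EventEmitter();
        const handler = () => {};
        emitter.on('ping', handler);
        emitter.on('ping', handler);
        console.log(emitter.listenerCount('ping')); // 2
        console.log(emitter.listenerCount('ping', handler)); // 2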

      • listeners(
        eventName: string | symbol
        ): Function[];

        Returns a copy of the array of listeners for the event named eventName.

        server.on('connection', (stream) => {
          console.log('someone connected!');
        });
        console.log(util.inspect(server.listeners('connection')));
        // Prints: [ [Function] ]
        
      • map(
        fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => any,
        options?: ArrayOptions
        ): Readable;

        This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be awaited before being passed to the result stream.

        @param fn

        a function to map over every chunk in the stream. Async or not.

        @returns

        a stream mapped with the function fn.
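
        For example, a minimal sketch mapping chunks with an async fn, at most two at a time:

        import { Readable } from 'node:stream';

        const mapped = Readable.from([1, 2, 3, 4]).map(async (x) => {
          return x * 2; // The returned promise is awaited before the chunk is emitted.
        }, { concurrency: 2 });
        for await (const chunk of mapped) {
          console.log(chunk); // 2, 4, 6, 8
        }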

      • off<K>(
        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;

        Alias for emitter.removeListener().

      • on(
        event: 'close',
        listener: () => void
        ): this;

        Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

        server.on('connection', (stream) => {
          console.log('someone connected!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

        import { EventEmitter } from 'node:events';
        const myEE = new EventEmitter();
        myEE.on('foo', () => console.log('a'));
        myEE.prependListener('foo', () => console.log('b'));
        myEE.emit('foo');
        // Prints:
        //   b
        //   a
        
        @param listener

        The callback function

        event: 'data',
        listener: (chunk: any) => void
        ): this;
        event: 'drain',
        listener: () => void
        ): this;
        event: 'end',
        listener: () => void
        ): this;
        event: 'error',
        listener: (err: Error) => void
        ): this;
        event: 'finish',
        listener: () => void
        ): this;
        event: 'pause',
        listener: () => void
        ): this;
        event: 'pipe',
        listener: (src: Readable) => void
        ): this;
        event: 'readable',
        listener: () => void
        ): this;
        event: 'resume',
        listener: () => void
        ): this;
        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;
        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • once(
        event: 'close',
        listener: () => void
        ): this;

        Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

        server.once('connection', (stream) => {
          console.log('Ah, we have our first user!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

        import { EventEmitter } from 'node:events';
        const myEE = new EventEmitter();
        myEE.once('foo', () => console.log('a'));
        myEE.prependOnceListener('foo', () => console.log('b'));
        myEE.emit('foo');
        // Prints:
        //   b
        //   a
        
        @param listener

        The callback function

        event: 'data',
        listener: (chunk: any) => void
        ): this;
        event: 'drain',
        listener: () => void
        ): this;
        event: 'end',
        listener: () => void
        ): this;
        event: 'error',
        listener: (err: Error) => void
        ): this;
        event: 'finish',
        listener: () => void
        ): this;
        event: 'pause',
        listener: () => void
        ): this;
        event: 'pipe',
        listener: (src: Readable) => void
        ): this;
        event: 'readable',
        listener: () => void
        ): this;
        event: 'resume',
        listener: () => void
        ): this;
        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;
        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • pause(): this;

        The readable.pause() method will cause a stream in flowing mode to stop emitting 'data' events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.

        const readable = getReadableStreamSomehow();
        readable.on('data', (chunk) => {
          console.log(`Received ${chunk.length} bytes of data.`);
          readable.pause();
          console.log('There will be no additional data for 1 second.');
          setTimeout(() => {
            console.log('Now data will start flowing again.');
            readable.resume();
          }, 1000);
        });
        

        The readable.pause() method has no effect if there is a 'readable' event listener.

      • pipe<T extends WritableStream>(
        destination: T,
        options?: { end: boolean }
        ): T;
      • prependListener(
        event: 'close',
        listener: () => void
        ): this;

        Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

        server.prependListener('connection', (stream) => {
          console.log('someone connected!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        @param listener

        The callback function

        event: 'data',
        listener: (chunk: any) => void
        ): this;
        event: 'drain',
        listener: () => void
        ): this;
        event: 'end',
        listener: () => void
        ): this;
        event: 'error',
        listener: (err: Error) => void
        ): this;
        event: 'finish',
        listener: () => void
        ): this;
        event: 'pause',
        listener: () => void
        ): this;
        event: 'pipe',
        listener: (src: Readable) => void
        ): this;
        event: 'readable',
        listener: () => void
        ): this;
        event: 'resume',
        listener: () => void
        ): this;
        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;
        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • prependOnceListener(
        event: 'close',
        listener: () => void
        ): this;

        Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

        server.prependOnceListener('connection', (stream) => {
          console.log('Ah, we have our first user!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        @param listener

        The callback function

        event: 'data',
        listener: (chunk: any) => void
        ): this;
        event: 'drain',
        listener: () => void
        ): this;
        event: 'end',
        listener: () => void
        ): this;
        event: 'error',
        listener: (err: Error) => void
        ): this;
        event: 'finish',
        listener: () => void
        ): this;
        event: 'pause',
        listener: () => void
        ): this;
        event: 'pipe',
        listener: (src: Readable) => void
        ): this;
        event: 'readable',
        listener: () => void
        ): this;
        event: 'resume',
        listener: () => void
        ): this;
        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;
        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • push(
        chunk: any,
        encoding?: BufferEncoding
        ): boolean;
      • rawListeners(
        eventName: string | symbol
        ): Function[];

        Returns a copy of the array of listeners for the event named eventName, including any wrappers (such as those created by .once()).

        import { EventEmitter } from 'node:events';
        const emitter = new EventEmitter();
        emitter.once('log', () => console.log('log once'));
        
        // Returns a new Array with a function `onceWrapper` which has a property
        // `listener` which contains the original listener bound above
        const listeners = emitter.rawListeners('log');
        const logFnWrapper = listeners[0];
        
        // Logs "log once" to the console and does not unbind the `once` event
        logFnWrapper.listener();
        
        // Logs "log once" to the console and removes the listener
        logFnWrapper();
        
        emitter.on('log', () => console.log('log persistently'));
        // Will return a new Array with a single function bound by `.on()` above
        const newListeners = emitter.rawListeners('log');
        
        // Logs "log persistently" twice
        newListeners[0]();
        emitter.emit('log');
        
      • read(
        size?: number
        ): any;

        The readable.read() method reads data out of the internal buffer and returns it. If no data is available to be read, null is returned. By default, the data is returned as a Buffer object unless an encoding has been specified using the readable.setEncoding() method or the stream is operating in object mode.

        The optional size argument specifies a specific number of bytes to read. If size bytes are not available to be read, null will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.

        If the size argument is not specified, all of the data contained in the internal buffer will be returned.

        The size argument must be less than or equal to 1 GiB.

        The readable.read() method should only be called on Readable streams operating in paused mode. In flowing mode, readable.read() is called automatically until the internal buffer is fully drained.

        const readable = getReadableStreamSomehow();
        
        // 'readable' may be triggered multiple times as data is buffered in
        readable.on('readable', () => {
          let chunk;
          console.log('Stream is readable (new data received in buffer)');
          // Use a loop to make sure we read all currently available data
          while (null !== (chunk = readable.read())) {
            console.log(`Read ${chunk.length} bytes of data...`);
          }
        });
        
        // 'end' will be triggered once when there is no more data available
        readable.on('end', () => {
          console.log('Reached end of stream.');
        });
        

        Each call to readable.read() returns a chunk of data, or null. The chunks are not concatenated. A while loop is necessary to consume all data currently in the buffer. When reading a large file .read() may return null, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new 'readable' event will be emitted when there is more data in the buffer. Finally the 'end' event will be emitted when there is no more data to come.

        Therefore to read a file's whole contents from a readable, it is necessary to collect chunks across multiple 'readable' events:

        const chunks = [];
        
        readable.on('readable', () => {
          let chunk;
          while (null !== (chunk = readable.read())) {
            chunks.push(chunk);
          }
        });
        
        readable.on('end', () => {
          const content = chunks.join('');
        });
        

        A Readable stream in object mode will always return a single item from a call to readable.read(size), regardless of the value of the size argument.

        If the readable.read() method returns a chunk of data, a 'data' event will also be emitted.

        Calling read after the 'end' event has been emitted will return null. No runtime error will be raised.

        @param size

        Optional argument to specify how much data to read.

      • reduce<T = any>(
        fn: (previous: any, data: any, options?: Pick<ArrayOptions, 'signal'>) => T,
        initial?: undefined,
        options?: Pick<ArrayOptions, 'signal'>
        ): Promise<T>;

        This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.

        If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.

        The reducer function iterates the stream element by element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async work into a readable.map call.

        @param fn

        a reducer function to call over every chunk in the stream. Async or not.

        @param initial

        the initial value to use in the reduction.

        @returns

        a promise for the final value of the reduction.

        reduce<T = any>(
        fn: (previous: T, data: any, options?: Pick<ArrayOptions, 'signal'>) => T,
        initial: T,
        options?: Pick<ArrayOptions, 'signal'>
        ): Promise<T>;

        This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.

        If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.

        The reducer function iterates the stream element by element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async work into a readable.map call.

        @param fn

        a reducer function to call over every chunk in the stream. Async or not.

        @param initial

        the initial value to use in the reduction.

        @returns

        a promise for the final value of the reduction.
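
        For example, a minimal sketch summing the chunks of a stream, with 0 as the initial value:

        import { Readable } from 'node:stream';

        const sum = await Readable.from([1, 2, 3, 4]).reduce((previous, data) => previous + data, 0);
        console.log(sum); // 10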

      • removeAllListeners(
        eventName?: string | symbol
        ): this;

        Removes all listeners, or those of the specified eventName.

        It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).

        Returns a reference to the EventEmitter, so that calls can be chained.

      • removeListener(
        event: 'close',
        listener: () => void
        ): this;

        Removes the specified listener from the listener array for the event named eventName.

        const callback = (stream) => {
          console.log('someone connected!');
        };
        server.on('connection', callback);
        // ...
        server.removeListener('connection', callback);
        

        removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance.

        Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from the emit() in progress. Subsequent events behave as expected.

        import { EventEmitter } from 'node:events';
        class MyEmitter extends EventEmitter {}
        const myEmitter = new MyEmitter();
        
        const callbackA = () => {
          console.log('A');
          myEmitter.removeListener('event', callbackB);
        };
        
        const callbackB = () => {
          console.log('B');
        };
        
        myEmitter.on('event', callbackA);
        
        myEmitter.on('event', callbackB);
        
        // callbackA removes listener callbackB but it will still be called.
        // Internal listener array at time of emit [callbackA, callbackB]
        myEmitter.emit('event');
        // Prints:
        //   A
        //   B
        
        // callbackB is now removed.
        // Internal listener array [callbackA]
        myEmitter.emit('event');
        // Prints:
        //   A
        

        Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.

        When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed:

        import { EventEmitter } from 'node:events';
        const ee = new EventEmitter();
        
        function pong() {
          console.log('pong');
        }
        
        ee.on('ping', pong);
        ee.once('ping', pong);
        ee.removeListener('ping', pong);
        
        ee.emit('ping');
        ee.emit('ping');
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        event: 'data',
        listener: (chunk: any) => void
        ): this;
        event: 'drain',
        listener: () => void
        ): this;
        event: 'end',
        listener: () => void
        ): this;
        event: 'error',
        listener: (err: Error) => void
        ): this;
        event: 'finish',
        listener: () => void
        ): this;
        event: 'pause',
        listener: () => void
        ): this;
        event: 'pipe',
        listener: (src: Readable) => void
        ): this;
        event: 'readable',
        listener: () => void
        ): this;
        event: 'resume',
        listener: () => void
        ): this;
        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;
        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • resume(): this;

        The readable.resume() method causes an explicitly paused Readable stream to resume emitting 'data' events, switching the stream into flowing mode.

        The readable.resume() method can be used to fully consume the data from a stream without actually processing any of that data:

        getReadableStreamSomehow()
          .resume()
          .on('end', () => {
            console.log('Reached the end, but did not read anything.');
          });
        

        The readable.resume() method has no effect if there is a 'readable' event listener.

      • setAAD(
        buffer: ArrayBufferView,
        options?: { plaintextLength: number }
        ): this;
      • setAutoPadding(
        autoPadding?: boolean
        ): this;

        When using block encryption algorithms, the Cipher class will automatically pad the input data to the appropriate block size. To disable the default padding call cipher.setAutoPadding(false).

        When autoPadding is false, the length of the entire input data must be a multiple of the cipher's block size or cipher.final() will throw an error. Disabling automatic padding is useful for non-standard padding, for instance using 0x0 instead of PKCS padding.

        The cipher.setAutoPadding() method must be called before cipher.final().

        @returns

        for method chaining.
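
        For example, a minimal sketch (key derivation and the zero IV mirror the Decipher examples below; real code should use a random IV): with padding disabled, the input must already be block-aligned:

        import { Buffer } from 'node:buffer';
        const { scryptSync, createCipheriv } = await import('node:crypto');

        const key = scryptSync('Password used to generate key', 'salt', 24);
        const iv = Buffer.alloc(16, 0);

        const cipher = createCipheriv('aes-192-cbc', key, iv);
        cipher.setAutoPadding(false);

        // '16 bytes exactly' is one full AES block, so final() will not throw.
        let encrypted = cipher.update('16 bytes exactly', 'utf8', 'hex');
        encrypted += cipher.final('hex');
        console.log(encrypted);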

      • setDefaultEncoding(
        encoding: BufferEncoding
        ): this;

        The writable.setDefaultEncoding() method sets the default encoding for a Writable stream.

        @param encoding

        The new default encoding

      • setEncoding(
        encoding: BufferEncoding
        ): this;

        The readable.setEncoding() method sets the character encoding for data read from the Readable stream.

        By default, no encoding is assigned and stream data will be returned as Buffer objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than as Buffer objects. For instance, calling readable.setEncoding('utf8') will cause the output data to be interpreted as UTF-8 data, and passed as strings. Calling readable.setEncoding('hex') will cause the data to be encoded in hexadecimal string format.

        The Readable stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream as Buffer objects.

        const readable = getReadableStreamSomehow();
        readable.setEncoding('utf8');
        readable.on('data', (chunk) => {
          assert.equal(typeof chunk, 'string');
          console.log('Got %d characters of string data:', chunk.length);
        });
        
        @param encoding

        The encoding to use.

      • setMaxListeners(
        n: number
        ): this;

        By default EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps finding memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.

        Returns a reference to the EventEmitter, so that calls can be chained.
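
        For example, a minimal sketch raising the limit for a single emitter:

        import { EventEmitter } from 'node:events';
        const emitter = new EventEmitter();
        emitter.setMaxListeners(20); // Warn only beyond 20 listeners per event.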

      • some(
        fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
        options?: ArrayOptions
        ): Promise<boolean>;

        This method is similar to Array.prototype.some and calls fn on each chunk in the stream until the awaited return value is true (or any truthy value). Once the awaited return value of an fn call on a chunk is truthy, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to true if fn returned a truthy value for at least one of the chunks.
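
        For example, a minimal sketch that short-circuits as soon as a chunk matches:

        import { Readable } from 'node:stream';

        const hasLarge = await Readable.from([1, 2, 3, 4]).some((x) => x > 3);
        console.log(hasLarge); // true; the stream is destroyed once a match is found.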

      • take(
        limit: number,
        options?: Pick<ArrayOptions, 'signal'>
        ): Readable;

        This method returns a new stream with the first limit chunks.

        @param limit

        the number of chunks to take from the readable.

        @returns

        a stream with limit chunks taken.
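
        For example, a minimal sketch keeping only the first two chunks:

        import { Readable } from 'node:stream';

        const firstTwo = await Readable.from([1, 2, 3, 4]).take(2).toArray();
        console.log(firstTwo); // [1, 2]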

      • toArray(
        options?: Pick<ArrayOptions, 'signal'>
        ): Promise<any[]>;

        This method allows easily obtaining the contents of a stream.

        As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.

        @returns

        a promise containing an array with the contents of the stream.
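
        For example, a minimal sketch collecting a small stream into memory:

        import { Readable } from 'node:stream';

        const chunks = await Readable.from([1, 2, 3]).toArray();
        console.log(chunks); // [1, 2, 3]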

      • uncork(): void;

        The writable.uncork() method flushes all data buffered since cork was called.

        When using writable.cork() and writable.uncork() to manage the buffering of writes to a stream, defer calls to writable.uncork() using process.nextTick(). Doing so allows batching of all writable.write() calls that occur within a given Node.js event loop phase.

        stream.cork();
        stream.write('some ');
        stream.write('data ');
        process.nextTick(() => stream.uncork());
        

        If the writable.cork() method is called multiple times on a stream, the same number of calls to writable.uncork() must be called to flush the buffered data.

        stream.cork();
        stream.write('some ');
        stream.cork();
        stream.write('data ');
        process.nextTick(() => {
          stream.uncork();
          // The data will not be flushed until uncork() is called a second time.
          stream.uncork();
        });
        

        See also: writable.cork().

      • unpipe(
        destination?: WritableStream
        ): this;

        The readable.unpipe() method detaches a Writable stream previously attached using the pipe method.

        If the destination is not specified, then all pipes are detached.

        If the destination is specified, but no pipe is set up for it, then the method does nothing.

        import fs from 'node:fs';
        const readable = getReadableStreamSomehow();
        const writable = fs.createWriteStream('file.txt');
        // All the data from readable goes into 'file.txt',
        // but only for the first second.
        readable.pipe(writable);
        setTimeout(() => {
          console.log('Stop writing to file.txt.');
          readable.unpipe(writable);
          console.log('Manually close the file stream.');
          writable.end();
        }, 1000);
        
        @param destination

        Optional specific stream to unpipe

      • unshift(
        chunk: any,
        encoding?: BufferEncoding
        ): void;

        Passing chunk as null signals the end of the stream (EOF) and behaves the same as readable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.

        The readable.unshift() method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.

        The stream.unshift(chunk) method cannot be called after the 'end' event has been emitted or a runtime error will be thrown.

        Developers using stream.unshift() often should consider switching to use of a Transform stream instead. See the API for stream implementers section for more information.

        // Pull off a header delimited by \n\n.
        // Use unshift() if we get too much.
        // Call the callback with (error, header, stream).
        import { StringDecoder } from 'node:string_decoder';
        function parseHeader(stream, callback) {
          stream.on('error', callback);
          stream.on('readable', onReadable);
          const decoder = new StringDecoder('utf8');
          let header = '';
          function onReadable() {
            let chunk;
            while (null !== (chunk = stream.read())) {
              const str = decoder.write(chunk);
              if (str.includes('\n\n')) {
                // Found the header boundary.
                const split = str.split(/\n\n/);
                header += split.shift();
                const remaining = split.join('\n\n');
                const buf = Buffer.from(remaining, 'utf8');
                stream.removeListener('error', callback);
                // Remove the 'readable' listener before unshifting.
                stream.removeListener('readable', onReadable);
                if (buf.length)
                  stream.unshift(buf);
                // Now the body of the message can be read from the stream.
                callback(null, header, stream);
                return;
              }
              // Still reading the header.
              header += str;
            }
          }
        }
        

        Unlike push, stream.unshift(chunk) will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results if readable.unshift() is called during a read (i.e. from within a _read implementation on a custom stream). Following the call to readable.unshift() with an immediate push will reset the reading state appropriately, however it is best to simply avoid calling readable.unshift() while in the process of performing a read.

        @param chunk

        Chunk of data to unshift onto the read queue. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray}, {DataView} or null. For object mode streams, chunk may be any JavaScript value.

        @param encoding

        Encoding of string chunks. Must be a valid Buffer encoding, such as 'utf8' or 'ascii'.

      • update(
        data: ArrayBufferView
        ): Buffer;

        Updates the cipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer, TypedArray, or DataView. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.

        The outputEncoding specifies the output format of the enciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.

        The cipher.update() method can be called multiple times with new data until cipher.final() is called. Calling cipher.update() after cipher.final() will result in an error being thrown.

        update(
        data: string,
        inputEncoding: Encoding
        ): Buffer;

        Updates the cipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer, TypedArray, or DataView. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.

        The outputEncoding specifies the output format of the enciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.

        The cipher.update() method can be called multiple times with new data until cipher.final() is called. Calling cipher.update() after cipher.final() will result in an error being thrown.

        @param inputEncoding

        The encoding of the data.

        update(
        data: ArrayBufferView,
        inputEncoding: undefined,
        outputEncoding: Encoding
        ): string;

        Updates the cipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer, TypedArray, or DataView. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.

        The outputEncoding specifies the output format of the enciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.

        The cipher.update() method can be called multiple times with new data until cipher.final() is called. Calling cipher.update() after cipher.final() will result in an error being thrown.

        @param inputEncoding

        The encoding of the data.

        @param outputEncoding

        The encoding of the return value.

        update(
        data: string,
        inputEncoding: undefined | Encoding,
        outputEncoding: Encoding
        ): string;

        Updates the cipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer, TypedArray, or DataView. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.

        The outputEncoding specifies the output format of the enciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.

        The cipher.update() method can be called multiple times with new data until cipher.final() is called. Calling cipher.update() after cipher.final() will result in an error being thrown.

        @param inputEncoding

        The encoding of the data.

        @param outputEncoding

        The encoding of the return value.
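
        For example, a minimal sketch (key derivation and the zero IV mirror the Decipher examples below; real code should use a random IV) chaining update() calls and finishing with final():

        import { Buffer } from 'node:buffer';
        const { scryptSync, createCipheriv } = await import('node:crypto');

        const key = scryptSync('Password used to generate key', 'salt', 24);
        const iv = Buffer.alloc(16, 0);

        const cipher = createCipheriv('aes-192-cbc', key, iv);
        let encrypted = cipher.update('some clear text', 'utf8', 'hex');
        encrypted += cipher.update(' and some more', 'utf8', 'hex');
        encrypted += cipher.final('hex');
        console.log(encrypted);
        // Calling cipher.update() again here would throw.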

      • wrap(
        stream: ReadableStream
        ): this;

        Prior to Node.js 0.10, streams did not implement the entire node:stream module API as it is currently defined. (See Compatibility for more information.)

        When using an older Node.js library that emits 'data' events and has a pause method that is advisory only, the readable.wrap() method can be used to create a Readable stream that uses the old stream as its data source.

        It will rarely be necessary to use readable.wrap() but the method has been provided as a convenience for interacting with older Node.js applications and libraries.

        import { OldReader } from './old-api-module.js';
        import { Readable } from 'node:stream';
        const oreader = new OldReader();
        const myReader = new Readable().wrap(oreader);
        
        myReader.on('readable', () => {
          myReader.read(); // etc.
        });
        
        @param stream

        An "old style" readable stream

      • write(
        chunk: any,
        callback?: (error: undefined | null | Error) => void
        ): boolean;

        The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

        The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

        While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

        Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

        If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

        function write(data, cb) {
          if (!stream.write(data)) {
            stream.once('drain', cb);
          } else {
            process.nextTick(cb);
          }
        }
        
        // Wait for cb to be called before doing any other write.
        write('hello', () => {
          console.log('Write completed, do more writes now.');
        });
        

        A Writable stream in object mode will always ignore the encoding argument.

        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        @param callback

        Callback for when this chunk of data is flushed.

        @returns

        false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.

        write(
        chunk: any,
        encoding: BufferEncoding,
        callback?: (error: undefined | null | Error) => void
        ): boolean;

        The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

        The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

        While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

        Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

        If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

        function write(data, cb) {
          if (!stream.write(data)) {
            stream.once('drain', cb);
          } else {
            process.nextTick(cb);
          }
        }
        
        // Wait for cb to be called before doing any other write.
        write('hello', () => {
          console.log('Write completed, do more writes now.');
        });
        

        A Writable stream in object mode will always ignore the encoding argument.

        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        @param encoding

        The encoding, if chunk is a string.

        @param callback

        Callback for when this chunk of data is flushed.

        @returns

        false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.

    • interface CipherOCBOptions

    • interface DecipherCCM

      Instances of the Decipher class are used to decrypt data. The class can be used in one of two ways:

      • As a stream that is both readable and writable, where plain encrypted data is written to produce unencrypted data on the readable side, or
      • Using the decipher.update() and decipher.final() methods to produce the unencrypted data.

      The createDecipheriv method is used to create Decipher instances. Decipher objects are not to be created directly using the new keyword.

      Example: Using Decipher objects as streams:

      import { Buffer } from 'node:buffer';
      const {
        scryptSync,
        createDecipheriv,
      } = await import('node:crypto');
      
      const algorithm = 'aes-192-cbc';
      const password = 'Password used to generate key';
      // Key length is dependent on the algorithm. In this case for aes192, it is
      // 24 bytes (192 bits).
      // Use the async `crypto.scrypt()` instead.
      const key = scryptSync(password, 'salt', 24);
      // The IV is usually passed along with the ciphertext.
      const iv = Buffer.alloc(16, 0); // Initialization vector.
      
      const decipher = createDecipheriv(algorithm, key, iv);
      
      let decrypted = '';
      decipher.on('readable', () => {
        let chunk;
        while (null !== (chunk = decipher.read())) {
          decrypted += chunk.toString('utf8');
        }
      });
      decipher.on('end', () => {
        console.log(decrypted);
        // Prints: some clear text data
      });
      
      // Encrypted with same algorithm, key and iv.
      const encrypted =
        'e5f79c5915c02171eec6b212d5520d44480993d7d622a7c4c2da32f6efda0ffa';
      decipher.write(encrypted, 'hex');
      decipher.end();
      

      Example: Using Decipher and piped streams:

      import {
        createReadStream,
        createWriteStream,
      } from 'node:fs';
      import { Buffer } from 'node:buffer';
      const {
        scryptSync,
        createDecipheriv,
      } = await import('node:crypto');
      
      const algorithm = 'aes-192-cbc';
      const password = 'Password used to generate key';
      // Use the async `crypto.scrypt()` instead.
      const key = scryptSync(password, 'salt', 24);
      // The IV is usually passed along with the ciphertext.
      const iv = Buffer.alloc(16, 0); // Initialization vector.
      
      const decipher = createDecipheriv(algorithm, key, iv);
      
      const input = createReadStream('test.enc');
      const output = createWriteStream('test.js');
      
      input.pipe(decipher).pipe(output);
      

      Example: Using the decipher.update() and decipher.final() methods:

      import { Buffer } from 'node:buffer';
      const {
        scryptSync,
        createDecipheriv,
      } = await import('node:crypto');
      
      const algorithm = 'aes-192-cbc';
      const password = 'Password used to generate key';
      // Use the async `crypto.scrypt()` instead.
      const key = scryptSync(password, 'salt', 24);
      // The IV is usually passed along with the ciphertext.
      const iv = Buffer.alloc(16, 0); // Initialization vector.
      
      const decipher = createDecipheriv(algorithm, key, iv);
      
      // Encrypted using same algorithm, key and iv.
      const encrypted =
        'e5f79c5915c02171eec6b212d5520d44480993d7d622a7c4c2da32f6efda0ffa';
      let decrypted = decipher.update(encrypted, 'hex', 'utf8');
      decrypted += decipher.final('utf8');
      console.log(decrypted);
      // Prints: some clear text data
      
      • allowHalfOpen: boolean

        If false then the stream will automatically end the writable side when the readable side ends. Set initially by the allowHalfOpen constructor option, which defaults to true.

        This can be changed manually to change the half-open behavior of an existing Duplex stream instance, but must be changed before the 'end' event is emitted.

      • readonly closed: boolean

        Is true after 'close' has been emitted.

      • destroyed: boolean

        Is true after readable.destroy() has been called.

      • readonly errored: null | Error

        Returns error if the stream has been destroyed with an error.

      • readable: boolean

        Is true if it is safe to call read, which means the stream has not been destroyed or emitted 'error' or 'end'.

      • readonly readableAborted: boolean

        Returns whether the stream was destroyed or errored before emitting 'end'.

      • readonly readableDidRead: boolean

        Returns whether 'data' has been emitted.

      • readonly readableEncoding: null | BufferEncoding

        Getter for the property encoding of a given Readable stream. The encoding property can be set using the setEncoding method.

      • readonly readableEnded: boolean

        Becomes true when 'end' event is emitted.

      • readonly readableFlowing: null | boolean

        This property reflects the current state of a Readable stream as described in the Three states section.

      • readonly readableHighWaterMark: number

        Returns the value of highWaterMark passed when creating this Readable.

      • readonly readableLength: number

        This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the highWaterMark.

      • readonly readableObjectMode: boolean

        Getter for the property objectMode of a given Readable stream.

      • readonly writable: boolean

        Is true if it is safe to call writable.write(), which means the stream has not been destroyed, errored, or ended.

      • readonly writableCorked: number

        Number of times writable.uncork() needs to be called in order to fully uncork the stream.

      • readonly writableEnded: boolean

        Is true after writable.end() has been called. This property does not indicate whether the data has been flushed, for this use writable.writableFinished instead.

      • readonly writableFinished: boolean

        Is set to true immediately before the 'finish' event is emitted.

      • readonly writableHighWaterMark: number

        Return the value of highWaterMark passed when creating this Writable.

      • readonly writableLength: number

        This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the highWaterMark.

      • readonly writableNeedDrain: boolean

        Is true if the stream's buffer has been full and the stream will emit 'drain'.

      • readonly writableObjectMode: boolean

        Getter for the property objectMode of a given Writable stream.

      • _construct(
        callback: (error?: null | Error) => void
        ): void;
      • _destroy(
        error: null | Error,
        callback: (error?: null | Error) => void
        ): void;
      • _final(
        callback: (error?: null | Error) => void
        ): void;
      • _flush(
        callback: (error?: null | Error) => void
        ): void;
      • _read(
        size: number
        ): void;
      • _transform(
        chunk: any,
        encoding: BufferEncoding,
        callback: (error?: null | Error) => void
        ): void;
      • _write(
        chunk: any,
        encoding: BufferEncoding,
        callback: (error?: null | Error) => void
        ): void;
      • _writev(
        chunks: { chunk: any; encoding: BufferEncoding }[],
        callback: (error?: null | Error) => void
        ): void;
      • [Symbol.asyncDispose](): Promise<void>;

        Calls readable.destroy() with an AbortError and returns a promise that fulfills when the stream is finished.

      • [Symbol.asyncIterator](): AsyncIterator<any>;
      • [captureRejectionSymbol]<K>(
        error: Error,
        event: string | symbol,
        ...args: AnyRest
        ): void;
      • addListener(
        event: 'close',
        listener: () => void
        ): this;

        Event emitter. The defined events include:

        1. close
        2. data
        3. drain
        4. end
        5. error
        6. finish
        7. pause
        8. pipe
        9. readable
        10. resume
        11. unpipe

        event: 'data',
        listener: (chunk: any) => void
        ): this;
        event: 'drain',
        listener: () => void
        ): this;
        event: 'end',
        listener: () => void
        ): this;
        event: 'error',
        listener: (err: Error) => void
        ): this;
        event: 'finish',
        listener: () => void
        ): this;
        event: 'pause',
        listener: () => void
        ): this;
        event: 'pipe',
        listener: (src: Readable) => void
        ): this;
        event: 'readable',
        listener: () => void
        ): this;
        event: 'resume',
        listener: () => void
        ): this;
        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;
        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • asIndexedPairs(
        options?: Pick<ArrayOptions, 'signal'>
        ): Readable;

        This method returns a new stream with chunks of the underlying stream paired with a counter in the form [index, chunk]. The first index value is 0 and it increases by 1 for each chunk produced.

        @returns

        a stream of indexed pairs.
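
        For example, a minimal sketch pairing each chunk with its index:

        import { Readable } from 'node:stream';

        const pairs = await Readable.from(['a', 'b']).asIndexedPairs().toArray();
        console.log(pairs); // [ [ 0, 'a' ], [ 1, 'b' ] ]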

      • compose<T extends ReadableStream>(
        stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,
        options?: { signal: AbortSignal }
        ): T;
      • cork(): void;

        The writable.cork() method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.

        The primary intent of writable.cork() is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them all to writable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork() without implementing writable._writev() may have an adverse effect on throughput.

        See also: writable.uncork(), writable._writev().

      • destroy(
        error?: Error
        ): this;

        Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the readable stream will release any internal resources and subsequent calls to push() will be ignored.

        Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.

        Implementors should not override this method, but instead implement readable._destroy().

        @param error

        Error which will be passed as payload in 'error' event
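
        For example, a minimal sketch destroying a stream with an error:

        import { Readable } from 'node:stream';

        const readable = Readable.from([1, 2, 3]);
        readable.on('error', (err) => console.error(err.message)); // Prints: boom
        readable.destroy(new Error('boom')); // Emits 'error', then 'close'.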

      • drop(
        limit: number,
        options?: Pick<ArrayOptions, 'signal'>
        ): Readable;

        This method returns a new stream with the first limit chunks dropped from the start.

        @param limit

        the number of chunks to drop from the readable.

        @returns

        a stream with limit chunks dropped from the start.
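
        For example, a minimal sketch dropping the first two chunks:

        import { Readable } from 'node:stream';

        const rest = await Readable.from([1, 2, 3, 4]).drop(2).toArray();
        console.log(rest); // [3, 4]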

      • emit(
        event: 'close'
        ): boolean;

        Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

        Returns true if the event had listeners, false otherwise.

        import { EventEmitter } from 'node:events';
        const myEmitter = new EventEmitter();
        
        // First listener
        myEmitter.on('event', function firstListener() {
          console.log('Helloooo! first listener');
        });
        // Second listener
        myEmitter.on('event', function secondListener(arg1, arg2) {
          console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
        });
        // Third listener
        myEmitter.on('event', function thirdListener(...args) {
          const parameters = args.join(', ');
          console.log(`event with parameters ${parameters} in third listener`);
        });
        
        console.log(myEmitter.listeners('event'));
        
        myEmitter.emit('event', 1, 2, 3, 4, 5);
        
        // Prints:
        // [
        //   [Function: firstListener],
        //   [Function: secondListener],
        //   [Function: thirdListener]
        // ]
        // Helloooo! first listener
        // event with parameters 1, 2 in second listener
        // event with parameters 1, 2, 3, 4, 5 in third listener
        
        event: 'data',
        chunk: any
        ): boolean;
        event: 'drain'
        ): boolean;
        event: 'end'
        ): boolean;
        event: 'error',
        err: Error
        ): boolean;
        event: 'finish'
        ): boolean;
        event: 'pause'
        ): boolean;
        event: 'pipe',
        ): boolean;
        event: 'readable'
        ): boolean;
        event: 'resume'
        ): boolean;
        event: 'unpipe',
        ): boolean;
        event: string | symbol,
        ...args: any[]
        ): boolean;
      • end(
        cb?: () => void
        ): this;

        Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

        Calling the write method after calling end will raise an error.

        // Write 'hello, ' and then end with 'world!'.
        import fs from 'node:fs';
        const file = fs.createWriteStream('example.txt');
        file.write('hello, ');
        file.end('world!');
        // Writing more now is not allowed!
        
        chunk: any,
        cb?: () => void
        ): this;

        Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

        Calling the write method after calling end will raise an error.

        // Write 'hello, ' and then end with 'world!'.
        import fs from 'node:fs';
        const file = fs.createWriteStream('example.txt');
        file.write('hello, ');
        file.end('world!');
        // Writing more now is not allowed!
        
        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        chunk: any,
        encoding: BufferEncoding,
        cb?: () => void
        ): this;

        Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

        Calling the write method after calling end will raise an error.

        // Write 'hello, ' and then end with 'world!'.
        import fs from 'node:fs';
        const file = fs.createWriteStream('example.txt');
        file.write('hello, ');
        file.end('world!');
        // Writing more now is not allowed!
        
        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        @param encoding

        The encoding if chunk is a string

      • eventNames(): (string | symbol)[];

        Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or Symbols.

        import { EventEmitter } from 'node:events';
        
        const myEE = new EventEmitter();
        myEE.on('foo', () => {});
        myEE.on('bar', () => {});
        
        const sym = Symbol('symbol');
        myEE.on(sym, () => {});
        
        console.log(myEE.eventNames());
        // Prints: [ 'foo', 'bar', Symbol(symbol) ]
        
      • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
        options?: ArrayOptions
        ): Promise<boolean>;

        This method is similar to Array.prototype.every and calls fn on each chunk in the stream to check if all awaited return values are truthy. As soon as an fn call on a chunk resolves to a falsy value, the stream is destroyed and the promise is fulfilled with false. If all of the fn calls on the chunks resolve to truthy values, the promise is fulfilled with true.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to true if fn returned a truthy value for every one of the chunks.
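
        For example, checking every chunk of a small stream:

        import { Readable } from 'node:stream';

        await Readable.from([1, 2, 3, 4]).every((x) => x > 0); // true
        await Readable.from([1, 2, 3, 4]).every((x) => x > 2); // false (stops at the first falsy result)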

      • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
        options?: ArrayOptions

        This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be awaited.

        @param fn

        a function to filter chunks from the stream. Async or not.

        @returns

        a stream filtered with the predicate fn.
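
        A short sketch, keeping only the even chunks:

        import { Readable } from 'node:stream';

        await Readable.from([1, 2, 3, 4]).filter((x) => x % 2 === 0).toArray(); // [2, 4]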

      • Once the decipher.final() method has been called, the Decipher object can no longer be used to decrypt data. Attempts to call decipher.final() more than once will result in an error being thrown.

        @returns

        Any remaining deciphered contents. If outputEncoding is specified, a string is returned. If an outputEncoding is not provided, a Buffer is returned.

        outputEncoding: BufferEncoding
        ): string;

        Once the decipher.final() method has been called, the Decipher object can no longer be used to decrypt data. Attempts to call decipher.final() more than once will result in an error being thrown.

        @param outputEncoding

        The encoding of the return value.

        @returns

        Any remaining deciphered contents. If outputEncoding is specified, a string is returned. If an outputEncoding is not provided, a Buffer is returned.

      • find<T>(
        fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => data is T,
        options?: ArrayOptions
        ): Promise<undefined | T>;

        This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.

        fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
        options?: ArrayOptions
        ): Promise<any>;

        This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.
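
        For instance:

        import { Readable } from 'node:stream';

        await Readable.from([1, 2, 3, 4]).find((x) => x > 2); // 3
        await Readable.from([1, 2, 3, 4]).find((x) => x > 9); // undefined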

      • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => any,
        options?: ArrayOptions

        This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.

        It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.

        @param fn

        a function to map over every chunk in the stream. May be async. May be a stream or generator.

        @returns

        a stream flat-mapped with the function fn.
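
        A quick sketch; a plain array is a valid iterable to return from fn:

        import { Readable } from 'node:stream';

        await Readable.from([1, 2]).flatMap((x) => [x, x * 10]).toArray(); // [1, 10, 2, 20]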

      • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => void | Promise<void>,
        options?: ArrayOptions
        ): Promise<void>;

        This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be awaited.

        This method is different from for await...of loops in that it can optionally process chunks concurrently. In addition, a forEach iteration can only be stopped by having passed a signal option and aborting the related AbortController while for await...of can be stopped with break or return. In either case the stream will be destroyed.

        This method is different from listening to the 'data' event in that it uses the 'readable' event in the underlying machinery and can limit the number of concurrent fn calls.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise for when the stream has finished.
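
        A minimal sketch; the optional concurrency option is part of ArrayOptions:

        import { Readable } from 'node:stream';

        await Readable.from([1, 2, 3]).forEach((x) => {
          console.log(x); // 1, then 2, then 3
        });
        // Resolves once every chunk has been processed.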

      • getMaxListeners(): number;

        Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to EventEmitter.defaultMaxListeners.

      • isPaused(): boolean;

        The readable.isPaused() method returns the current operating state of the Readable. This is used primarily by the mechanism that underlies the readable.pipe() method. In most typical cases, there will be no reason to use this method directly.

        const readable = new stream.Readable();
        
        readable.isPaused(); // === false
        readable.pause();
        readable.isPaused(); // === true
        readable.resume();
        readable.isPaused(); // === false
        
      • options?: { destroyOnReturn: boolean }
        ): AsyncIterator<any>;

        The iterator created by this method gives users the option to cancel the destruction of the stream if the for await...of loop is exited by return, break, or throw, or to control whether the iterator should destroy the stream if the stream emitted an error during iteration.
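
        A sketch of the destroyOnReturn option:

        import { Readable } from 'node:stream';

        const readable = Readable.from([1, 2, 3]);
        for await (const chunk of readable.iterator({ destroyOnReturn: false })) {
          console.log(chunk); // 1
          break;
        }
        console.log(readable.destroyed); // false: the early exit did not destroy the stream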

      • eventName: string | symbol,
        listener?: Function
        ): number;

        Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.

        @param eventName

        The name of the event being listened for

        @param listener

        The event handler function
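
        For example:

        import { EventEmitter } from 'node:events';
        const myEE = new EventEmitter();
        const ping = () => {};
        myEE.on('ping', ping);
        myEE.on('ping', ping);
        console.log(myEE.listenerCount('ping'));       // 2
        console.log(myEE.listenerCount('ping', ping)); // 2 (the same listener was added twice)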

      • eventName: string | symbol
        ): Function[];

        Returns a copy of the array of listeners for the event named eventName.

        server.on('connection', (stream) => {
          console.log('someone connected!');
        });
        console.log(util.inspect(server.listeners('connection')));
        // Prints: [ [Function] ]
        
      • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => any,
        options?: ArrayOptions

        This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be awaited before being passed to the result stream.

        @param fn

        a function to map over every chunk in the stream. Async or not.

        @returns

        a stream mapped with the function fn.
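
        For instance, doubling each chunk:

        import { Readable } from 'node:stream';

        await Readable.from([1, 2, 3]).map((x) => x * 2).toArray(); // [2, 4, 6]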

      • off<K>(
        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;

        Alias for emitter.removeListener().

      • event: 'close',
        listener: () => void
        ): this;

        Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

        server.on('connection', (stream) => {
          console.log('someone connected!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

        import { EventEmitter } from 'node:events';
        const myEE = new EventEmitter();
        myEE.on('foo', () => console.log('a'));
        myEE.prependListener('foo', () => console.log('b'));
        myEE.emit('foo');
        // Prints:
        //   b
        //   a
        
        @param listener

        The callback function

        event: 'data',
        listener: (chunk: any) => void
        ): this;
        event: 'drain',
        listener: () => void
        ): this;
        event: 'end',
        listener: () => void
        ): this;
        event: 'error',
        listener: (err: Error) => void
        ): this;
        event: 'finish',
        listener: () => void
        ): this;
        event: 'pause',
        listener: () => void
        ): this;
        event: 'pipe',
        listener: (src: Readable) => void
        ): this;
        event: 'readable',
        listener: () => void
        ): this;
        event: 'resume',
        listener: () => void
        ): this;
        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;
        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • event: 'close',
        listener: () => void
        ): this;

        Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

        server.once('connection', (stream) => {
          console.log('Ah, we have our first user!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

        import { EventEmitter } from 'node:events';
        const myEE = new EventEmitter();
        myEE.once('foo', () => console.log('a'));
        myEE.prependOnceListener('foo', () => console.log('b'));
        myEE.emit('foo');
        // Prints:
        //   b
        //   a
        
        @param listener

        The callback function

        event: 'data',
        listener: (chunk: any) => void
        ): this;
        event: 'drain',
        listener: () => void
        ): this;
        event: 'end',
        listener: () => void
        ): this;
        event: 'error',
        listener: (err: Error) => void
        ): this;
        event: 'finish',
        listener: () => void
        ): this;
        event: 'pause',
        listener: () => void
        ): this;
        event: 'pipe',
        listener: (src: Readable) => void
        ): this;
        event: 'readable',
        listener: () => void
        ): this;
        event: 'resume',
        listener: () => void
        ): this;
        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;
        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • pause(): this;

        The readable.pause() method will cause a stream in flowing mode to stop emitting 'data' events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.

        const readable = getReadableStreamSomehow();
        readable.on('data', (chunk) => {
          console.log(`Received ${chunk.length} bytes of data.`);
          readable.pause();
          console.log('There will be no additional data for 1 second.');
          setTimeout(() => {
            console.log('Now data will start flowing again.');
            readable.resume();
          }, 1000);
        });
        

        The readable.pause() method has no effect if there is a 'readable' event listener.

      • pipe<T extends WritableStream>(
        destination: T,
        options?: { end: boolean }
        ): T;
      • event: 'close',
        listener: () => void
        ): this;

        Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

        server.prependListener('connection', (stream) => {
          console.log('someone connected!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        @param listener

        The callback function

        event: 'data',
        listener: (chunk: any) => void
        ): this;
        event: 'drain',
        listener: () => void
        ): this;
        event: 'end',
        listener: () => void
        ): this;
        event: 'error',
        listener: (err: Error) => void
        ): this;
        event: 'finish',
        listener: () => void
        ): this;
        event: 'pause',
        listener: () => void
        ): this;
        event: 'pipe',
        listener: (src: Readable) => void
        ): this;
        event: 'readable',
        listener: () => void
        ): this;
        event: 'resume',
        listener: () => void
        ): this;
        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;
        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • event: 'close',
        listener: () => void
        ): this;

        Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

        server.prependOnceListener('connection', (stream) => {
          console.log('Ah, we have our first user!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        @param listener

        The callback function

        event: 'data',
        listener: (chunk: any) => void
        ): this;
        event: 'drain',
        listener: () => void
        ): this;
        event: 'end',
        listener: () => void
        ): this;
        event: 'error',
        listener: (err: Error) => void
        ): this;
        event: 'finish',
        listener: () => void
        ): this;
        event: 'pause',
        listener: () => void
        ): this;
        event: 'pipe',
        listener: (src: Readable) => void
        ): this;
        event: 'readable',
        listener: () => void
        ): this;
        event: 'resume',
        listener: () => void
        ): this;
        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;
        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • chunk: any,
        encoding?: BufferEncoding
        ): boolean;
      • eventName: string | symbol
        ): Function[];

        Returns a copy of the array of listeners for the event named eventName, including any wrappers (such as those created by .once()).

        import { EventEmitter } from 'node:events';
        const emitter = new EventEmitter();
        emitter.once('log', () => console.log('log once'));
        
        // Returns a new Array with a function `onceWrapper` which has a property
        // `listener` which contains the original listener bound above
        const listeners = emitter.rawListeners('log');
        const logFnWrapper = listeners[0];
        
        // Logs "log once" to the console and does not unbind the `once` event
        logFnWrapper.listener();
        
        // Logs "log once" to the console and removes the listener
        logFnWrapper();
        
        emitter.on('log', () => console.log('log persistently'));
        // Will return a new Array with a single function bound by `.on()` above
        const newListeners = emitter.rawListeners('log');
        
        // Logs "log persistently" twice
        newListeners[0]();
        emitter.emit('log');
        
      • size?: number
        ): any;

        The readable.read() method reads data out of the internal buffer and returns it. If no data is available to be read, null is returned. By default, the data is returned as a Buffer object unless an encoding has been specified using the readable.setEncoding() method or the stream is operating in object mode.

        The optional size argument specifies a specific number of bytes to read. If size bytes are not available to be read, null will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.

        If the size argument is not specified, all of the data contained in the internal buffer will be returned.

        The size argument must be less than or equal to 1 GiB.

        The readable.read() method should only be called on Readable streams operating in paused mode. In flowing mode, readable.read() is called automatically until the internal buffer is fully drained.

        const readable = getReadableStreamSomehow();
        
        // 'readable' may be triggered multiple times as data is buffered in
        readable.on('readable', () => {
          let chunk;
          console.log('Stream is readable (new data received in buffer)');
          // Use a loop to make sure we read all currently available data
          while (null !== (chunk = readable.read())) {
            console.log(`Read ${chunk.length} bytes of data...`);
          }
        });
        
        // 'end' will be triggered once when there is no more data available
        readable.on('end', () => {
          console.log('Reached end of stream.');
        });
        

        Each call to readable.read() returns a chunk of data, or null. The chunks are not concatenated. A while loop is necessary to consume all data currently in the buffer. When reading a large file .read() may return null, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new 'readable' event will be emitted when there is more data in the buffer. Finally the 'end' event will be emitted when there is no more data to come.

        Therefore to read a file's whole contents from a readable, it is necessary to collect chunks across multiple 'readable' events:

        const chunks = [];
        
        readable.on('readable', () => {
          let chunk;
          while (null !== (chunk = readable.read())) {
            chunks.push(chunk);
          }
        });
        
        readable.on('end', () => {
          const content = chunks.join('');
        });
        

        A Readable stream in object mode will always return a single item from a call to readable.read(size), regardless of the value of the size argument.

        If the readable.read() method returns a chunk of data, a 'data' event will also be emitted.

        Calling read after the 'end' event has been emitted will return null. No runtime error will be raised.

        @param size

        Optional argument to specify how much data to read.

      • reduce<T = any>(
        fn: (previous: any, data: any, options?: Pick<ArrayOptions, 'signal'>) => T,
        initial?: undefined,
        options?: Pick<ArrayOptions, 'signal'>
        ): Promise<T>;

        This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.

        If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.

        The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function and use the readable.map method.

        @param fn

        a reducer function to call over every chunk in the stream. Async or not.

        @param initial

        the initial value to use in the reduction.

        @returns

        a promise for the final value of the reduction.

        reduce<T = any>(
        fn: (previous: T, data: any, options?: Pick<ArrayOptions, 'signal'>) => T,
        initial: T,
        options?: Pick<ArrayOptions, 'signal'>
        ): Promise<T>;

        This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.

        If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.

        The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function and use the readable.map method.

        @param fn

        a reducer function to call over every chunk in the stream. Async or not.

        @param initial

        the initial value to use in the reduction.

        @returns

        a promise for the final value of the reduction.
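
        For example, summing a stream of numbers:

        import { Readable } from 'node:stream';

        await Readable.from([1, 2, 3, 4]).reduce((total, x) => total + x, 0); // 10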

      • eventName?: string | symbol
        ): this;

        Removes all listeners, or those of the specified eventName.

        It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).

        Returns a reference to the EventEmitter, so that calls can be chained.
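
        For example:

        import { EventEmitter } from 'node:events';
        const myEE = new EventEmitter();
        myEE.on('foo', () => {});
        myEE.on('bar', () => {});
        myEE.removeAllListeners('foo'); // only the 'foo' listeners are removed
        console.log(myEE.eventNames()); // [ 'bar' ]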

      • event: 'close',
        listener: () => void
        ): this;

        Removes the specified listener from the listener array for the event named eventName.

        const callback = (stream) => {
          console.log('someone connected!');
        };
        server.on('connection', callback);
        // ...
        server.removeListener('connection', callback);
        

        removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance.

        Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from the emit() in progress. Subsequent events behave as expected.

        import { EventEmitter } from 'node:events';
        class MyEmitter extends EventEmitter {}
        const myEmitter = new MyEmitter();
        
        const callbackA = () => {
          console.log('A');
          myEmitter.removeListener('event', callbackB);
        };
        
        const callbackB = () => {
          console.log('B');
        };
        
        myEmitter.on('event', callbackA);
        
        myEmitter.on('event', callbackB);
        
        // callbackA removes listener callbackB but it will still be called.
        // Internal listener array at time of emit [callbackA, callbackB]
        myEmitter.emit('event');
        // Prints:
        //   A
        //   B
        
        // callbackB is now removed.
        // Internal listener array [callbackA]
        myEmitter.emit('event');
        // Prints:
        //   A
        

        Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.

        When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed:

        import { EventEmitter } from 'node:events';
        const ee = new EventEmitter();
        
        function pong() {
          console.log('pong');
        }
        
        ee.on('ping', pong);
        ee.once('ping', pong);
        ee.removeListener('ping', pong);
        
        ee.emit('ping');
        ee.emit('ping');
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        event: 'data',
        listener: (chunk: any) => void
        ): this;
        event: 'drain',
        listener: () => void
        ): this;
        event: 'end',
        listener: () => void
        ): this;
        event: 'error',
        listener: (err: Error) => void
        ): this;
        event: 'finish',
        listener: () => void
        ): this;
        event: 'pause',
        listener: () => void
        ): this;
        event: 'pipe',
        listener: (src: Readable) => void
        ): this;
        event: 'readable',
        listener: () => void
        ): this;
        event: 'resume',
        listener: () => void
        ): this;
        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;
        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • resume(): this;

        The readable.resume() method causes an explicitly paused Readable stream to resume emitting 'data' events, switching the stream into flowing mode.

        The readable.resume() method can be used to fully consume the data from a stream without actually processing any of that data:

        getReadableStreamSomehow()
          .resume()
          .on('end', () => {
            console.log('Reached the end, but did not read anything.');
          });
        

        The readable.resume() method has no effect if there is a 'readable' event listener.

      • buffer: ArrayBufferView,
        options: { plaintextLength: number }
        ): this;
      • buffer: ArrayBufferView
        ): this;
      • auto_padding?: boolean
        ): this;

        When data has been encrypted without standard block padding, calling decipher.setAutoPadding(false) will disable automatic padding to prevent decipher.final() from checking for and removing padding.

        Turning auto padding off will only work if the input data's length is a multiple of the cipher's block size.

        The decipher.setAutoPadding() method must be called before decipher.final().

        @returns

        for method chaining.
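
        A minimal sketch; here key, iv, and the hex ciphertext encrypted are assumed to be supplied by the application, and the ciphertext length must be a multiple of the 16-byte AES block size:

        const { createDecipheriv } = await import('node:crypto');

        // `key`, `iv`, and `encrypted` are assumed to exist.
        const decipher = createDecipheriv('aes-192-cbc', key, iv);
        decipher.setAutoPadding(false); // must precede decipher.final()
        let padded = decipher.update(encrypted, 'hex', 'utf8');
        padded += decipher.final('utf8'); // custom padding is left in place for the caller to strip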

      • encoding: BufferEncoding
        ): this;

        The writable.setDefaultEncoding() method sets the default encoding for a Writable stream.

        @param encoding

        The new default encoding
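
        A small sketch; decodeStrings: false is used so that _write can observe the encoding of string chunks:

        import { Writable } from 'node:stream';

        const writable = new Writable({
          decodeStrings: false,
          write(chunk, encoding, callback) {
            console.log(encoding); // 'hex' once the default is changed
            callback();
          },
        });
        writable.setDefaultEncoding('hex');
        writable.write('deadbeef'); // now interpreted as hex rather than utf8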

      • encoding: BufferEncoding
        ): this;

        The readable.setEncoding() method sets the character encoding for data read from the Readable stream.

        By default, no encoding is assigned and stream data will be returned as Buffer objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than as Buffer objects. For instance, calling readable.setEncoding('utf8') will cause the output data to be interpreted as UTF-8 data, and passed as strings. Calling readable.setEncoding('hex') will cause the data to be encoded in hexadecimal string format.

        The Readable stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream as Buffer objects.

        const readable = getReadableStreamSomehow();
        readable.setEncoding('utf8');
        readable.on('data', (chunk) => {
          assert.equal(typeof chunk, 'string');
          console.log('Got %d characters of string data:', chunk.length);
        });
        
        @param encoding

        The encoding to use.

      • n: number
        ): this;

        By default EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps find memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.

        Returns a reference to the EventEmitter, so that calls can be chained.
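
        For example:

        import { EventEmitter } from 'node:events';
        const myEE = new EventEmitter();
        myEE.setMaxListeners(20);            // raise the warning threshold
        console.log(myEE.getMaxListeners()); // 20
        myEE.setMaxListeners(0);             // 0 (or Infinity) removes the limit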

      • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
        options?: ArrayOptions
        ): Promise<boolean>;

        This method is similar to Array.prototype.some and calls fn on each chunk in the stream until the awaited return value is true (or any truthy value). As soon as an fn call on a chunk resolves to a truthy value, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks resolve to a truthy value, the promise is fulfilled with false.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to true if fn returned a truthy value for at least one of the chunks.
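
        For instance:

        import { Readable } from 'node:stream';

        await Readable.from([1, 2, 3, 4]).some((x) => x > 2); // true (stops at the first truthy result)
        await Readable.from([1, 2, 3, 4]).some((x) => x > 9); // false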

      • limit: number,
        options?: Pick<ArrayOptions, 'signal'>

        This method returns a new stream with the first limit chunks.

        @param limit

        the number of chunks to take from the readable.

        @returns

        a stream with limit chunks taken.
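
        For example:

        import { Readable } from 'node:stream';

        await Readable.from([1, 2, 3, 4]).take(2).toArray(); // [1, 2]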

      • options?: Pick<ArrayOptions, 'signal'>
        ): Promise<any[]>;

        This method allows easily obtaining the contents of a stream.

        As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.

        @returns

        a promise containing an array with the contents of the stream.
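
        For instance:

        import { Readable } from 'node:stream';

        await Readable.from([1, 2, 3]).toArray(); // [1, 2, 3]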

      • uncork(): void;

        The writable.uncork() method flushes all data buffered since cork was called.

        When using writable.cork() and writable.uncork() to manage the buffering of writes to a stream, defer calls to writable.uncork() using process.nextTick(). Doing so allows batching of all writable.write() calls that occur within a given Node.js event loop phase.

        stream.cork();
        stream.write('some ');
        stream.write('data ');
        process.nextTick(() => stream.uncork());
        

        If the writable.cork() method is called multiple times on a stream, the same number of calls to writable.uncork() must be made to flush the buffered data.

        stream.cork();
        stream.write('some ');
        stream.cork();
        stream.write('data ');
        process.nextTick(() => {
          stream.uncork();
          // The data will not be flushed until uncork() is called a second time.
          stream.uncork();
        });
        

        See also: writable.cork().

      • destination?: WritableStream
        ): this;

        The readable.unpipe() method detaches a Writable stream previously attached using the pipe method.

        If the destination is not specified, then all pipes are detached.

        If the destination is specified, but no pipe is set up for it, then the method does nothing.

        import fs from 'node:fs';
        const readable = getReadableStreamSomehow();
        const writable = fs.createWriteStream('file.txt');
        // All the data from readable goes into 'file.txt',
        // but only for the first second.
        readable.pipe(writable);
        setTimeout(() => {
          console.log('Stop writing to file.txt.');
          readable.unpipe(writable);
          console.log('Manually close the file stream.');
          writable.end();
        }, 1000);
        
        @param destination

        Optional specific stream to unpipe

      • chunk: any,
        encoding?: BufferEncoding
        ): void;

        Passing chunk as null signals the end of the stream (EOF) and behaves the same as readable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.

        The readable.unshift() method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.

        The stream.unshift(chunk) method cannot be called after the 'end' event has been emitted or a runtime error will be thrown.

        Developers using stream.unshift() often should consider switching to use of a Transform stream instead. See the API for stream implementers section for more information.

        // Pull off a header delimited by \n\n.
        // Use unshift() if we get too much.
        // Call the callback with (error, header, stream).
        import { StringDecoder } from 'node:string_decoder';
        function parseHeader(stream, callback) {
          stream.on('error', callback);
          stream.on('readable', onReadable);
          const decoder = new StringDecoder('utf8');
          let header = '';
          function onReadable() {
            let chunk;
            while (null !== (chunk = stream.read())) {
              const str = decoder.write(chunk);
              if (str.includes('\n\n')) {
                // Found the header boundary.
                const split = str.split(/\n\n/);
                header += split.shift();
                const remaining = split.join('\n\n');
                const buf = Buffer.from(remaining, 'utf8');
                stream.removeListener('error', callback);
                // Remove the 'readable' listener before unshifting.
                stream.removeListener('readable', onReadable);
                if (buf.length)
                  stream.unshift(buf);
                // Now the body of the message can be read from the stream.
                callback(null, header, stream);
                return;
              }
              // Still reading the header.
              header += str;
            }
          }
        }
        

        Unlike push, stream.unshift(chunk) will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results if readable.unshift() is called during a read (i.e. from within a _read implementation on a custom stream). Following the call to readable.unshift() with an immediate push will reset the reading state appropriately, however it is best to simply avoid calling readable.unshift() while in the process of performing a read.

        @param chunk

        Chunk of data to unshift onto the read queue. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray}, {DataView} or null. For object mode streams, chunk may be any JavaScript value.

        @param encoding

        Encoding of string chunks. Must be a valid Buffer encoding, such as 'utf8' or 'ascii'.

      • data: ArrayBufferView
        ): Buffer;

        Updates the decipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer. If data is a Buffer then inputEncoding is ignored.

        The outputEncoding specifies the output format of the deciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.

        The decipher.update() method can be called multiple times with new data until decipher.final() is called. Calling decipher.update() after decipher.final() will result in an error being thrown.

        data: string,
        inputEncoding: Encoding
        ): Buffer;

        Updates the decipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer. If data is a Buffer then inputEncoding is ignored.

        The outputEncoding specifies the output format of the deciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.

        The decipher.update() method can be called multiple times with new data until decipher.final() is called. Calling decipher.update() after decipher.final() will result in an error being thrown.

        @param inputEncoding

        The encoding of the data string.

        data: ArrayBufferView,
        inputEncoding: undefined,
        outputEncoding: Encoding
        ): string;

        Updates the decipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer. If data is a Buffer then inputEncoding is ignored.

        The outputEncoding specifies the output format of the deciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.

        The decipher.update() method can be called multiple times with new data until decipher.final() is called. Calling decipher.update() after decipher.final() will result in an error being thrown.

        @param inputEncoding

        The encoding of the data string.

        @param outputEncoding

        The encoding of the return value.

        data: string,
        inputEncoding: undefined | Encoding,
        outputEncoding: Encoding
        ): string;

        Updates the decipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer. If data is a Buffer then inputEncoding is ignored.

        The outputEncoding specifies the output format of the deciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.

        The decipher.update() method can be called multiple times with new data until decipher.final() is called. Calling decipher.update() after decipher.final() will result in an error being thrown.

        @param inputEncoding

        The encoding of the data string.

        @param outputEncoding

        The encoding of the return value.

      • stream: ReadableStream
        ): this;

        Prior to Node.js 0.10, streams did not implement the entire node:stream module API as it is currently defined. (See Compatibility for more information.)

        When using an older Node.js library that emits 'data' events and has a pause method that is advisory only, the readable.wrap() method can be used to create a Readable stream that uses the old stream as its data source.

        It will rarely be necessary to use readable.wrap() but the method has been provided as a convenience for interacting with older Node.js applications and libraries.

        import { OldReader } from './old-api-module.js';
        import { Readable } from 'node:stream';
        const oreader = new OldReader();
        const myReader = new Readable().wrap(oreader);
        
        myReader.on('readable', () => {
          myReader.read(); // etc.
        });
        
        @param stream

        An "old style" readable stream

      • chunk: any,
        callback?: (error: undefined | null | Error) => void
        ): boolean;

        The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

        The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

        While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

        Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

        If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

        function write(data, cb) {
          if (!stream.write(data)) {
            stream.once('drain', cb);
          } else {
            process.nextTick(cb);
          }
        }
        
        // Wait for cb to be called before doing any other write.
        write('hello', () => {
          console.log('Write completed, do more writes now.');
        });
        

        A Writable stream in object mode will always ignore the encoding argument.

        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        @param callback

        Callback for when this chunk of data is flushed.

        @returns

        false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.

        chunk: any,
        encoding: BufferEncoding,
        callback?: (error: undefined | null | Error) => void
        ): boolean;

        The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

        The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

        While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

        Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

        If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

        function write(data, cb) {
          if (!stream.write(data)) {
            stream.once('drain', cb);
          } else {
            process.nextTick(cb);
          }
        }
        
        // Wait for cb to be called before doing any other write.
        write('hello', () => {
          console.log('Write completed, do more writes now.');
        });
        

        A Writable stream in object mode will always ignore the encoding argument.

        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        @param encoding

        The encoding, if chunk is a string.

        @param callback

        Callback for when this chunk of data is flushed.

        @returns

        false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.

    • interface DecipherChaCha20Poly1305

      Instances of the Decipher class are used to decrypt data. The class can be used in one of two ways:

      • As a stream that is both readable and writable, where plain encrypted data is written to produce unencrypted data on the readable side, or
      • Using the decipher.update() and decipher.final() methods to produce the unencrypted data.

      The createDecipheriv method is used to create Decipher instances. Decipher objects are not to be created directly using the new keyword.

      Example: Using Decipher objects as streams:

      import { Buffer } from 'node:buffer';
      const {
        scryptSync,
        createDecipheriv,
      } = await import('node:crypto');
      
      const algorithm = 'aes-192-cbc';
      const password = 'Password used to generate key';
      // Key length is dependent on the algorithm. In this case for aes192, it is
      // 24 bytes (192 bits).
      // Use the async `crypto.scrypt()` instead.
      const key = scryptSync(password, 'salt', 24);
      // The IV is usually passed along with the ciphertext.
      const iv = Buffer.alloc(16, 0); // Initialization vector.
      
      const decipher = createDecipheriv(algorithm, key, iv);
      
      let decrypted = '';
      decipher.on('readable', () => {
        let chunk;
        while (null !== (chunk = decipher.read())) {
          decrypted += chunk.toString('utf8');
        }
      });
      decipher.on('end', () => {
        console.log(decrypted);
        // Prints: some clear text data
      });
      
      // Encrypted with same algorithm, key and iv.
      const encrypted =
        'e5f79c5915c02171eec6b212d5520d44480993d7d622a7c4c2da32f6efda0ffa';
      decipher.write(encrypted, 'hex');
      decipher.end();
      

      Example: Using Decipher and piped streams:

      import {
        createReadStream,
        createWriteStream,
      } from 'node:fs';
      import { Buffer } from 'node:buffer';
      const {
        scryptSync,
        createDecipheriv,
      } = await import('node:crypto');
      
      const algorithm = 'aes-192-cbc';
      const password = 'Password used to generate key';
      // Use the async `crypto.scrypt()` instead.
      const key = scryptSync(password, 'salt', 24);
      // The IV is usually passed along with the ciphertext.
      const iv = Buffer.alloc(16, 0); // Initialization vector.
      
      const decipher = createDecipheriv(algorithm, key, iv);
      
      const input = createReadStream('test.enc');
      const output = createWriteStream('test.js');
      
      input.pipe(decipher).pipe(output);
      

      Example: Using the decipher.update() and decipher.final() methods:

      import { Buffer } from 'node:buffer';
      const {
        scryptSync,
        createDecipheriv,
      } = await import('node:crypto');
      
      const algorithm = 'aes-192-cbc';
      const password = 'Password used to generate key';
      // Use the async `crypto.scrypt()` instead.
      const key = scryptSync(password, 'salt', 24);
      // The IV is usually passed along with the ciphertext.
      const iv = Buffer.alloc(16, 0); // Initialization vector.
      
      const decipher = createDecipheriv(algorithm, key, iv);
      
      // Encrypted using same algorithm, key and iv.
      const encrypted =
        'e5f79c5915c02171eec6b212d5520d44480993d7d622a7c4c2da32f6efda0ffa';
      let decrypted = decipher.update(encrypted, 'hex', 'utf8');
      decrypted += decipher.final('utf8');
      console.log(decrypted);
      // Prints: some clear text data
      
      • allowHalfOpen: boolean

        If false then the stream will automatically end the writable side when the readable side ends. Set initially by the allowHalfOpen constructor option, which defaults to true.

        This can be changed manually to change the half-open behavior of an existing Duplex stream instance, but must be changed before the 'end' event is emitted.

      • readonly closed: boolean

        Is true after 'close' has been emitted.

      • destroyed: boolean

        Is true after readable.destroy() has been called.

      • readonly errored: null | Error

        Returns error if the stream has been destroyed with an error.

      • readable: boolean

        Is true if it is safe to call read, which means the stream has not been destroyed or emitted 'error' or 'end'.

      • readonly readableAborted: boolean

        Returns whether the stream was destroyed or errored before emitting 'end'.

      • readonly readableDidRead: boolean

        Returns whether 'data' has been emitted.

      • readonly readableEncoding: null | BufferEncoding

        Getter for the property encoding of a given Readable stream. The encoding property can be set using the setEncoding method.

      • readonly readableEnded: boolean

        Becomes true when 'end' event is emitted.

      • readonly readableFlowing: null | boolean

        This property reflects the current state of a Readable stream as described in the Three states section.

      • readonly readableHighWaterMark: number

        Returns the value of highWaterMark passed when creating this Readable.

      • readonly readableLength: number

        This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the highWaterMark.

      • readonly readableObjectMode: boolean

        Getter for the property objectMode of a given Readable stream.

      • readonly writable: boolean

        Is true if it is safe to call writable.write(), which means the stream has not been destroyed, errored, or ended.

      • readonly writableCorked: number

        Number of times writable.uncork() needs to be called in order to fully uncork the stream.

      • readonly writableEnded: boolean

        Is true after writable.end() has been called. This property does not indicate whether the data has been flushed; for this, use writable.writableFinished instead.

      • readonly writableFinished: boolean

        Is set to true immediately before the 'finish' event is emitted.

      • readonly writableHighWaterMark: number

        Return the value of highWaterMark passed when creating this Writable.

      • readonly writableLength: number

        This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the highWaterMark.

      • readonly writableNeedDrain: boolean

        Is true if the stream's buffer has been full and the stream will emit 'drain'.

      • readonly writableObjectMode: boolean

        Getter for the property objectMode of a given Writable stream.

      • callback: (error?: null | Error) => void
        ): void;
      • error: null | Error,
        callback: (error?: null | Error) => void
        ): void;
      • callback: (error?: null | Error) => void
        ): void;
      • size: number
        ): void;
      • chunk: any,
        encoding: BufferEncoding,
        ): void;
      • chunk: any,
        encoding: BufferEncoding,
        callback: (error?: null | Error) => void
        ): void;
      • chunks: { chunk: any; encoding: BufferEncoding }[],
        callback: (error?: null | Error) => void
        ): void;
      • [Symbol.asyncDispose](): Promise<void>;

        Calls readable.destroy() with an AbortError and returns a promise that fulfills when the stream is finished.

      • [Symbol.asyncIterator](): AsyncIterator<any>;
      • error: Error,
        event: string | symbol,
        ...args: AnyRest
        ): void;
      • event: 'close',
        listener: () => void
        ): this;

        Event emitter. The defined events on documents include:

        1. close
        2. data
        3. drain
        4. end
        5. error
        6. finish
        7. pause
        8. pipe
        9. readable
        10. resume
        11. unpipe

        event: 'data',
        listener: (chunk: any) => void
        ): this;
        event: 'drain',
        listener: () => void
        ): this;
        event: 'end',
        listener: () => void
        ): this;
        event: 'error',
        listener: (err: Error) => void
        ): this;
        event: 'finish',
        listener: () => void
        ): this;
        event: 'pause',
        listener: () => void
        ): this;
        event: 'pipe',
        listener: (src: Readable) => void
        ): this;
        event: 'readable',
        listener: () => void
        ): this;
        event: 'resume',
        listener: () => void
        ): this;
        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;
        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • asIndexedPairs(
        options?: Pick<ArrayOptions, 'signal'>
        ): Readable;

        This method returns a new stream with chunks of the underlying stream paired with a counter in the form [index, chunk]. The first index value is 0 and it increases by 1 for each chunk produced.

        @returns

        a stream of indexed pairs.
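
        For example, a minimal sketch using Readable.from to fabricate a throwaway source stream:

        import { Readable } from 'node:stream';

        const pairs = await Readable.from(['a', 'b', 'c'])
          .asIndexedPairs()
          .toArray();
        console.log(pairs);
        // Prints: [ [ 0, 'a' ], [ 1, 'b' ], [ 2, 'c' ] ]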

      • compose<T extends ReadableStream>(
        stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,
        options?: { signal: AbortSignal }
        ): T;
      • cork(): void;

        The writable.cork() method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.

        The primary intent of writable.cork() is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them all to writable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork() without implementing writable._writev() may have an adverse effect on throughput.

        See also: writable.uncork(), writable._writev().

      • destroy(
        error?: Error
        ): this;

        Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the readable stream will release any internal resources and subsequent calls to push() will be ignored.

        Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.

        Implementors should not override this method, but instead implement readable._destroy().

        @param error

        Error which will be passed as payload in 'error' event

      • drop(
        limit: number,
        options?: Pick<ArrayOptions, 'signal'>
        ): Readable;

        This method returns a new stream with the first limit chunks dropped from the start.

        @param limit

        the number of chunks to drop from the readable.

        @returns

        a stream with limit chunks dropped from the start.
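
        For example (a minimal sketch; Readable.from is used only to fabricate a source):

        import { Readable } from 'node:stream';

        const rest = await Readable.from([1, 2, 3, 4]).drop(2).toArray();
        console.log(rest);
        // Prints: [ 3, 4 ]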

      • emit(
        event: 'close'
        ): boolean;

        Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

        Returns true if the event had listeners, false otherwise.

        import { EventEmitter } from 'node:events';
        const myEmitter = new EventEmitter();
        
        // First listener
        myEmitter.on('event', function firstListener() {
          console.log('Helloooo! first listener');
        });
        // Second listener
        myEmitter.on('event', function secondListener(arg1, arg2) {
          console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
        });
        // Third listener
        myEmitter.on('event', function thirdListener(...args) {
          const parameters = args.join(', ');
          console.log(`event with parameters ${parameters} in third listener`);
        });
        
        console.log(myEmitter.listeners('event'));
        
        myEmitter.emit('event', 1, 2, 3, 4, 5);
        
        // Prints:
        // [
        //   [Function: firstListener],
        //   [Function: secondListener],
        //   [Function: thirdListener]
        // ]
        // Helloooo! first listener
        // event with parameters 1, 2 in second listener
        // event with parameters 1, 2, 3, 4, 5 in third listener
        
        The remaining overloads:

        emit(event: 'data', chunk: any): boolean;
        emit(event: 'drain'): boolean;
        emit(event: 'end'): boolean;
        emit(event: 'error', err: Error): boolean;
        emit(event: 'finish'): boolean;
        emit(event: 'pause'): boolean;
        emit(event: 'pipe', src: Readable): boolean;
        emit(event: 'readable'): boolean;
        emit(event: 'resume'): boolean;
        emit(event: 'unpipe', src: Readable): boolean;
        emit(event: string | symbol, ...args: any[]): boolean;
      • end(
        cb?: () => void
        ): this;

        Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

        Calling the write() method after calling end() will raise an error.

        // Write 'hello, ' and then end with 'world!'.
        import fs from 'node:fs';
        const file = fs.createWriteStream('example.txt');
        file.write('hello, ');
        file.end('world!');
        // Writing more now is not allowed!
        
        The remaining overloads accept a final chunk and an optional encoding:

        end(chunk: any, cb?: () => void): this;
        end(chunk: any, encoding: BufferEncoding, cb?: () => void): this;

        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        @param encoding

        The encoding, if chunk is a string.

      • eventNames(): (string | symbol)[];

        Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or Symbols.

        import { EventEmitter } from 'node:events';
        
        const myEE = new EventEmitter();
        myEE.on('foo', () => {});
        myEE.on('bar', () => {});
        
        const sym = Symbol('symbol');
        myEE.on(sym, () => {});
        
        console.log(myEE.eventNames());
        // Prints: [ 'foo', 'bar', Symbol(symbol) ]
        
      • every(
        fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
        options?: ArrayOptions
        ): Promise<boolean>;

        This method is similar to Array.prototype.every and calls fn on each chunk in the stream to check whether every awaited return value is truthy. Once an fn call's awaited return value is falsy, the stream is destroyed and the promise is fulfilled with false. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to true if fn returned a truthy value for every one of the chunks.
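
        For example, a minimal sketch:

        import { Readable } from 'node:stream';

        const allBig = await Readable.from([10, 20, 30]).every((n) => n > 5);
        console.log(allBig);
        // Prints: true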

      • filter(
        fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
        options?: ArrayOptions
        ): Readable;

        This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be awaited.

        @param fn

        a function to filter chunks from the stream. Async or not.

        @returns

        a stream filtered with the predicate fn.
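
        For example, a minimal sketch keeping only even numbers:

        import { Readable } from 'node:stream';

        const evens = await Readable.from([1, 2, 3, 4])
          .filter((n) => n % 2 === 0)
          .toArray();
        console.log(evens);
        // Prints: [ 2, 4 ]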

      • final(): Buffer;

        Once the decipher.final() method has been called, the Decipher object can no longer be used to decrypt data. Attempts to call decipher.final() more than once will result in an error being thrown.

        @returns

        Any remaining deciphered contents. If outputEncoding is specified, a string is returned. If an outputEncoding is not provided, a Buffer is returned.

        final(
        outputEncoding: BufferEncoding
        ): string;

        @param outputEncoding

        The encoding of the return value.

      • find<T>(
        fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => data is T,
        options?: ArrayOptions
        ): Promise<undefined | T>;

        This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.

        find(
        fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
        options?: ArrayOptions
        ): Promise<any>;
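
        For example, a minimal sketch:

        import { Readable } from 'node:stream';

        const firstBig = await Readable.from([1, 2, 3, 4]).find((n) => n > 2);
        console.log(firstBig);
        // Prints: 3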

      • flatMap(
        fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => any,
        options?: ArrayOptions
        ): Readable;

        This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.

        It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.

        @param fn

        a function to map over every chunk in the stream. May be async. May be a stream or generator.

        @returns

        a stream flat-mapped with the function fn.
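
        For example, a minimal sketch splitting lines into words:

        import { Readable } from 'node:stream';

        const words = await Readable.from(['hello world', 'foo bar'])
          .flatMap((line) => line.split(' '))
          .toArray();
        console.log(words);
        // Prints: [ 'hello', 'world', 'foo', 'bar' ]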

      • forEach(
        fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => void | Promise<void>,
        options?: ArrayOptions
        ): Promise<void>;

        This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be awaited.

        This method is different from for await...of loops in that it can optionally process chunks concurrently. In addition, a forEach iteration can only be stopped by having passed a signal option and aborting the related AbortController while for await...of can be stopped with break or return. In either case the stream will be destroyed.

        This method is different from listening to the 'data' event in that it uses the 'readable' event in the underlying machinery and can limit the number of concurrent fn calls.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise for when the stream has finished.
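
        For example, a minimal sketch processing up to two chunks concurrently:

        import { Readable } from 'node:stream';
        import { setTimeout } from 'node:timers/promises';

        await Readable.from([1, 2, 3]).forEach(async (n) => {
          await setTimeout(10); // simulate asynchronous work
          console.log(n);
        }, { concurrency: 2 });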

      • getMaxListeners(): number;

        Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to EventEmitter.defaultMaxListeners.

      • isPaused(): boolean;

        The readable.isPaused() method returns the current operating state of the Readable. This is used primarily by the mechanism that underlies the readable.pipe() method. In most typical cases, there will be no reason to use this method directly.

        const readable = new stream.Readable();
        
        readable.isPaused(); // === false
        readable.pause();
        readable.isPaused(); // === true
        readable.resume();
        readable.isPaused(); // === false
        
      • iterator(
        options?: { destroyOnReturn: boolean }
        ): AsyncIterator<any>;

        The iterator created by this method lets users cancel the destruction of the stream when the for await...of loop is exited by return, break, or throw (by passing destroyOnReturn: false); the stream is still destroyed if it emitted an error during iteration.
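
        For example, a minimal sketch where breaking out of the loop leaves the stream intact:

        import { Readable } from 'node:stream';

        const readable = Readable.from([1, 2, 3]);
        for await (const chunk of readable.iterator({ destroyOnReturn: false })) {
          console.log(chunk); // 1
          break;
        }
        console.log(readable.destroyed);
        // Prints: false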

      • listenerCount(
        eventName: string | symbol,
        listener?: Function
        ): number;

        Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.

        @param eventName

        The name of the event being listened for

        @param listener

        The event handler function

      • listeners(
        eventName: string | symbol
        ): Function[];

        Returns a copy of the array of listeners for the event named eventName.

        server.on('connection', (stream) => {
          console.log('someone connected!');
        });
        console.log(util.inspect(server.listeners('connection')));
        // Prints: [ [Function] ]
        
      • map(
        fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => any,
        options?: ArrayOptions
        ): Readable;

        This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be awaited before being passed to the result stream.

        @param fn

        a function to map over every chunk in the stream. Async or not.

        @returns

        a stream mapped with the function fn.
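
        For example, a minimal sketch (async mappers are awaited before the result is forwarded):

        import { Readable } from 'node:stream';

        const doubled = await Readable.from([1, 2, 3])
          .map(async (n) => n * 2)
          .toArray();
        console.log(doubled);
        // Prints: [ 2, 4, 6 ]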

      • off<K>(
        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;

        Alias for emitter.removeListener().

      • on(
        event: 'close',
        listener: () => void
        ): this;

        Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

        server.on('connection', (stream) => {
          console.log('someone connected!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

        import { EventEmitter } from 'node:events';
        const myEE = new EventEmitter();
        myEE.on('foo', () => console.log('a'));
        myEE.prependListener('foo', () => console.log('b'));
        myEE.emit('foo');
        // Prints:
        //   b
        //   a
        
        @param listener

        The callback function

        The same documentation applies to the remaining overloads:

        on(event: 'data', listener: (chunk: any) => void): this;
        on(event: 'drain', listener: () => void): this;
        on(event: 'end', listener: () => void): this;
        on(event: 'error', listener: (err: Error) => void): this;
        on(event: 'finish', listener: () => void): this;
        on(event: 'pause', listener: () => void): this;
        on(event: 'pipe', listener: (src: Readable) => void): this;
        on(event: 'readable', listener: () => void): this;
        on(event: 'resume', listener: () => void): this;
        on(event: 'unpipe', listener: (src: Readable) => void): this;
        on(event: string | symbol, listener: (...args: any[]) => void): this;
      • once(
        event: 'close',
        listener: () => void
        ): this;

        Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

        server.once('connection', (stream) => {
          console.log('Ah, we have our first user!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

        import { EventEmitter } from 'node:events';
        const myEE = new EventEmitter();
        myEE.once('foo', () => console.log('a'));
        myEE.prependOnceListener('foo', () => console.log('b'));
        myEE.emit('foo');
        // Prints:
        //   b
        //   a
        
        @param listener

        The callback function

        The same documentation applies to the remaining overloads:

        once(event: 'data', listener: (chunk: any) => void): this;
        once(event: 'drain', listener: () => void): this;
        once(event: 'end', listener: () => void): this;
        once(event: 'error', listener: (err: Error) => void): this;
        once(event: 'finish', listener: () => void): this;
        once(event: 'pause', listener: () => void): this;
        once(event: 'pipe', listener: (src: Readable) => void): this;
        once(event: 'readable', listener: () => void): this;
        once(event: 'resume', listener: () => void): this;
        once(event: 'unpipe', listener: (src: Readable) => void): this;
        once(event: string | symbol, listener: (...args: any[]) => void): this;
      • pause(): this;

        The readable.pause() method will cause a stream in flowing mode to stop emitting 'data' events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.

        const readable = getReadableStreamSomehow();
        readable.on('data', (chunk) => {
          console.log(`Received ${chunk.length} bytes of data.`);
          readable.pause();
          console.log('There will be no additional data for 1 second.');
          setTimeout(() => {
            console.log('Now data will start flowing again.');
            readable.resume();
          }, 1000);
        });
        

        The readable.pause() method has no effect if there is a 'readable' event listener.

      • pipe<T extends WritableStream>(
        destination: T,
        options?: { end: boolean }
        ): T;
      • prependListener(
        event: 'close',
        listener: () => void
        ): this;

        Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

        server.prependListener('connection', (stream) => {
          console.log('someone connected!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        @param listener

        The callback function

        The same documentation applies to the remaining overloads:

        prependListener(event: 'data', listener: (chunk: any) => void): this;
        prependListener(event: 'drain', listener: () => void): this;
        prependListener(event: 'end', listener: () => void): this;
        prependListener(event: 'error', listener: (err: Error) => void): this;
        prependListener(event: 'finish', listener: () => void): this;
        prependListener(event: 'pause', listener: () => void): this;
        prependListener(event: 'pipe', listener: (src: Readable) => void): this;
        prependListener(event: 'readable', listener: () => void): this;
        prependListener(event: 'resume', listener: () => void): this;
        prependListener(event: 'unpipe', listener: (src: Readable) => void): this;
        prependListener(event: string | symbol, listener: (...args: any[]) => void): this;
      • prependOnceListener(
        event: 'close',
        listener: () => void
        ): this;

        Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

        server.prependOnceListener('connection', (stream) => {
          console.log('Ah, we have our first user!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        @param listener

        The callback function

        The same documentation applies to the remaining overloads:

        prependOnceListener(event: 'data', listener: (chunk: any) => void): this;
        prependOnceListener(event: 'drain', listener: () => void): this;
        prependOnceListener(event: 'end', listener: () => void): this;
        prependOnceListener(event: 'error', listener: (err: Error) => void): this;
        prependOnceListener(event: 'finish', listener: () => void): this;
        prependOnceListener(event: 'pause', listener: () => void): this;
        prependOnceListener(event: 'pipe', listener: (src: Readable) => void): this;
        prependOnceListener(event: 'readable', listener: () => void): this;
        prependOnceListener(event: 'resume', listener: () => void): this;
        prependOnceListener(event: 'unpipe', listener: (src: Readable) => void): this;
        prependOnceListener(event: string | symbol, listener: (...args: any[]) => void): this;
      • push(
        chunk: any,
        encoding?: BufferEncoding
        ): boolean;
      • rawListeners(
        eventName: string | symbol
        ): Function[];

        Returns a copy of the array of listeners for the event named eventName, including any wrappers (such as those created by .once()).

        import { EventEmitter } from 'node:events';
        const emitter = new EventEmitter();
        emitter.once('log', () => console.log('log once'));
        
        // Returns a new Array with a function `onceWrapper` which has a property
        // `listener` which contains the original listener bound above
        const listeners = emitter.rawListeners('log');
        const logFnWrapper = listeners[0];
        
        // Logs "log once" to the console and does not unbind the `once` event
        logFnWrapper.listener();
        
        // Logs "log once" to the console and removes the listener
        logFnWrapper();
        
        emitter.on('log', () => console.log('log persistently'));
        // Will return a new Array with a single function bound by `.on()` above
        const newListeners = emitter.rawListeners('log');
        
        // Logs "log persistently" twice
        newListeners[0]();
        emitter.emit('log');
        
      • read(
        size?: number
        ): any;

        The readable.read() method reads data out of the internal buffer and returns it. If no data is available to be read, null is returned. By default, the data is returned as a Buffer object unless an encoding has been specified using the readable.setEncoding() method or the stream is operating in object mode.

        The optional size argument specifies a specific number of bytes to read. If size bytes are not available to be read, null will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.

        If the size argument is not specified, all of the data contained in the internal buffer will be returned.

        The size argument must be less than or equal to 1 GiB.

        The readable.read() method should only be called on Readable streams operating in paused mode. In flowing mode, readable.read() is called automatically until the internal buffer is fully drained.

        const readable = getReadableStreamSomehow();
        
        // 'readable' may be triggered multiple times as data is buffered in
        readable.on('readable', () => {
          let chunk;
          console.log('Stream is readable (new data received in buffer)');
          // Use a loop to make sure we read all currently available data
          while (null !== (chunk = readable.read())) {
            console.log(`Read ${chunk.length} bytes of data...`);
          }
        });
        
        // 'end' will be triggered once when there is no more data available
        readable.on('end', () => {
          console.log('Reached end of stream.');
        });
        

        Each call to readable.read() returns a chunk of data, or null. The chunks are not concatenated. A while loop is necessary to consume all data currently in the buffer. When reading a large file .read() may return null, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new 'readable' event will be emitted when there is more data in the buffer. Finally the 'end' event will be emitted when there is no more data to come.

        Therefore to read a file's whole contents from a readable, it is necessary to collect chunks across multiple 'readable' events:

        const chunks = [];
        
        readable.on('readable', () => {
          let chunk;
          while (null !== (chunk = readable.read())) {
            chunks.push(chunk);
          }
        });
        
        readable.on('end', () => {
          const content = chunks.join('');
        });
        

        A Readable stream in object mode will always return a single item from a call to readable.read(size), regardless of the value of the size argument.

        If the readable.read() method returns a chunk of data, a 'data' event will also be emitted.

        Calling read after the 'end' event has been emitted will return null. No runtime error will be raised.

        @param size

        Optional argument to specify how much data to read.

      • reduce<T = any>(
        fn: (previous: any, data: any, options?: Pick<ArrayOptions, 'signal'>) => T,
        initial?: undefined,
        options?: Pick<ArrayOptions, 'signal'>
        ): Promise<T>;

        This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.

        If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.

        The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function and use the readable.map method.

        @param fn

        a reducer function to call over every chunk in the stream. Async or not.

        @param initial

        the initial value to use in the reduction.

        @returns

        a promise for the final value of the reduction.

        reduce<T = any>(
        fn: (previous: T, data: any, options?: Pick<ArrayOptions, 'signal'>) => T,
        initial: T,
        options?: Pick<ArrayOptions, 'signal'>
        ): Promise<T>;

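        For example, a minimal sketch summing a stream of numbers:

        import { Readable } from 'node:stream';

        const total = await Readable.from([1, 2, 3, 4]).reduce(
          (sum, n) => sum + n,
          0, // initial value; if omitted, the first chunk is used instead
        );
        console.log(total);
        // Prints: 10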

      • removeAllListeners(
        eventName?: string | symbol
        ): this;

        Removes all listeners, or those of the specified eventName.

        It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).

        Returns a reference to the EventEmitter, so that calls can be chained.

      • removeListener(
        event: 'close',
        listener: () => void
        ): this;

        Removes the specified listener from the listener array for the event named eventName.

        const callback = (stream) => {
          console.log('someone connected!');
        };
        server.on('connection', callback);
        // ...
        server.removeListener('connection', callback);
        

        removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance.

        Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from an emit() in progress. Subsequent events behave as expected.

        import { EventEmitter } from 'node:events';
        class MyEmitter extends EventEmitter {}
        const myEmitter = new MyEmitter();
        
        const callbackA = () => {
          console.log('A');
          myEmitter.removeListener('event', callbackB);
        };
        
        const callbackB = () => {
          console.log('B');
        };
        
        myEmitter.on('event', callbackA);
        
        myEmitter.on('event', callbackB);
        
        // callbackA removes listener callbackB but it will still be called.
        // Internal listener array at time of emit [callbackA, callbackB]
        myEmitter.emit('event');
        // Prints:
        //   A
        //   B
        
        // callbackB is now removed.
        // Internal listener array [callbackA]
        myEmitter.emit('event');
        // Prints:
        //   A
        

        Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.

        When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed:

        import { EventEmitter } from 'node:events';
        const ee = new EventEmitter();
        
        function pong() {
          console.log('pong');
        }
        
        ee.on('ping', pong);
        ee.once('ping', pong);
        ee.removeListener('ping', pong);
        
        ee.emit('ping');
        ee.emit('ping');
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        The same documentation applies to the remaining overloads:

        removeListener(event: 'data', listener: (chunk: any) => void): this;
        removeListener(event: 'drain', listener: () => void): this;
        removeListener(event: 'end', listener: () => void): this;
        removeListener(event: 'error', listener: (err: Error) => void): this;
        removeListener(event: 'finish', listener: () => void): this;
        removeListener(event: 'pause', listener: () => void): this;
        removeListener(event: 'pipe', listener: (src: Readable) => void): this;
        removeListener(event: 'readable', listener: () => void): this;
        removeListener(event: 'resume', listener: () => void): this;
        removeListener(event: 'unpipe', listener: (src: Readable) => void): this;
        removeListener(event: string | symbol, listener: (...args: any[]) => void): this;
      • resume(): this;

        The readable.resume() method causes an explicitly paused Readable stream to resume emitting 'data' events, switching the stream into flowing mode.

        The readable.resume() method can be used to fully consume the data from a stream without actually processing any of that data:

        getReadableStreamSomehow()
          .resume()
          .on('end', () => {
            console.log('Reached the end, but did not read anything.');
          });
        

        The readable.resume() method has no effect if there is a 'readable' event listener.

      • setAAD(
        buffer: ArrayBufferView,
        options: { plaintextLength: number }
        ): this;
      • setAuthTag(
        buffer: ArrayBufferView
        ): this;
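
        For orientation, here is a minimal sketch of an authenticated round trip with 'aes-256-gcm', showing where setAAD and setAuthTag fit. (The plaintextLength option passed to setAAD is required for CCM mode; for GCM it is optional.)

        import { Buffer } from 'node:buffer';
        const {
          randomBytes,
          createCipheriv,
          createDecipheriv,
        } = await import('node:crypto');

        const key = randomBytes(32);       // 256-bit key
        const iv = randomBytes(12);        // 96-bit IV, the recommended size for GCM
        const aad = Buffer.from('header'); // additional authenticated data
        const plaintext = Buffer.from('some clear text', 'utf8');

        const cipher = createCipheriv('aes-256-gcm', key, iv);
        cipher.setAAD(aad, { plaintextLength: plaintext.length });
        const encrypted = Buffer.concat([cipher.update(plaintext), cipher.final()]);
        const tag = cipher.getAuthTag();

        const decipher = createDecipheriv('aes-256-gcm', key, iv);
        decipher.setAAD(aad, { plaintextLength: encrypted.length });
        decipher.setAuthTag(tag);
        const decrypted = Buffer.concat([decipher.update(encrypted), decipher.final()]);
        console.log(decrypted.toString('utf8'));
        // Prints: some clear text
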
      • setAutoPadding(
        auto_padding?: boolean
        ): this;

        When data has been encrypted without standard block padding, calling decipher.setAutoPadding(false) will disable automatic padding to prevent decipher.final() from checking for and removing padding.

        Turning auto padding off will only work if the input data's length is a multiple of the cipher's block size.

        The decipher.setAutoPadding() method must be called before decipher.final().

        @returns

        for method chaining.

      • setDefaultEncoding(
        encoding: BufferEncoding
        ): this;

        The writable.setDefaultEncoding() method sets the default encoding for a Writable stream.

        @param encoding

        The new default encoding

      • setEncoding(
        encoding: BufferEncoding
        ): this;

        The readable.setEncoding() method sets the character encoding for data read from the Readable stream.

        By default, no encoding is assigned and stream data will be returned as Buffer objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than as Buffer objects. For instance, calling readable.setEncoding('utf8') will cause the output data to be interpreted as UTF-8 data, and passed as strings. Calling readable.setEncoding('hex') will cause the data to be encoded in hexadecimal string format.

        The Readable stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream as Buffer objects.

        const readable = getReadableStreamSomehow();
        readable.setEncoding('utf8');
        readable.on('data', (chunk) => {
          assert.equal(typeof chunk, 'string');
          console.log('Got %d characters of string data:', chunk.length);
        });
        
        @param encoding

        The encoding to use.

      • setMaxListeners(
        n: number
        ): this;

        By default EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps find memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.

        Returns a reference to the EventEmitter, so that calls can be chained.

      • some(
        fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
        options?: ArrayOptions
        ): Promise<boolean>;

        This method is similar to Array.prototype.some and calls fn on each chunk in the stream until an awaited return value is truthy. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to true if fn returned a truthy value for at least one of the chunks.
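
        For example, a minimal sketch (the stream is destroyed as soon as a match is found):

        import { Readable } from 'node:stream';

        const hasBig = await Readable.from([1, 2, 50]).some((n) => n > 10);
        console.log(hasBig);
        // Prints: true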

      • take(
        limit: number,
        options?: Pick<ArrayOptions, 'signal'>
        ): Readable;

        This method returns a new stream with the first limit chunks.

        @param limit

        the number of chunks to take from the readable.

        @returns

        a stream with limit chunks taken.
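
        For example, a minimal sketch:

        import { Readable } from 'node:stream';

        const firstTwo = await Readable.from([1, 2, 3, 4]).take(2).toArray();
        console.log(firstTwo);
        // Prints: [ 1, 2 ]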

      • toArray(
        options?: Pick<ArrayOptions, 'signal'>
        ): Promise<any[]>;

        This method allows easily obtaining the contents of a stream.

        As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.

        @returns

        a promise containing an array with the contents of the stream.
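
        For example, a minimal sketch (fine for small test streams, as noted above):

        import { Readable } from 'node:stream';

        const chunks = await Readable.from([1, 2, 3]).toArray();
        console.log(chunks);
        // Prints: [ 1, 2, 3 ]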

      • uncork(): void;

        The writable.uncork() method flushes all data buffered since cork was called.

        When using writable.cork() and writable.uncork() to manage the buffering of writes to a stream, defer calls to writable.uncork() using process.nextTick(). Doing so allows batching of all writable.write() calls that occur within a given Node.js event loop phase.

        stream.cork();
        stream.write('some ');
        stream.write('data ');
        process.nextTick(() => stream.uncork());
        

        If the writable.cork() method is called multiple times on a stream, the same number of calls to writable.uncork() must be called to flush the buffered data.

        stream.cork();
        stream.write('some ');
        stream.cork();
        stream.write('data ');
        process.nextTick(() => {
          stream.uncork();
          // The data will not be flushed until uncork() is called a second time.
          stream.uncork();
        });
        

        See also: writable.cork().

      • unpipe(
        destination?: WritableStream
        ): this;

        The readable.unpipe() method detaches a Writable stream previously attached using the pipe method.

        If the destination is not specified, then all pipes are detached.

        If the destination is specified, but no pipe is set up for it, then the method does nothing.

        import fs from 'node:fs';
        const readable = getReadableStreamSomehow();
        const writable = fs.createWriteStream('file.txt');
        // All the data from readable goes into 'file.txt',
        // but only for the first second.
        readable.pipe(writable);
        setTimeout(() => {
          console.log('Stop writing to file.txt.');
          readable.unpipe(writable);
          console.log('Manually close the file stream.');
          writable.end();
        }, 1000);
        
        @param destination

        Optional specific stream to unpipe

      • unshift(
        chunk: any,
        encoding?: BufferEncoding
        ): void;

        Passing chunk as null signals the end of the stream (EOF) and behaves the same as readable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.

        The readable.unshift() method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.

        The stream.unshift(chunk) method cannot be called after the 'end' event has been emitted or a runtime error will be thrown.

        Developers using stream.unshift() often should consider switching to use of a Transform stream instead. See the API for stream implementers section for more information.

        // Pull off a header delimited by \n\n.
        // Use unshift() if we get too much.
        // Call the callback with (error, header, stream).
        import { StringDecoder } from 'node:string_decoder';
        function parseHeader(stream, callback) {
          stream.on('error', callback);
          stream.on('readable', onReadable);
          const decoder = new StringDecoder('utf8');
          let header = '';
          function onReadable() {
            let chunk;
            while (null !== (chunk = stream.read())) {
              const str = decoder.write(chunk);
              if (str.includes('\n\n')) {
                // Found the header boundary.
                const split = str.split(/\n\n/);
                header += split.shift();
                const remaining = split.join('\n\n');
                const buf = Buffer.from(remaining, 'utf8');
                stream.removeListener('error', callback);
                // Remove the 'readable' listener before unshifting.
                stream.removeListener('readable', onReadable);
                if (buf.length)
                  stream.unshift(buf);
                // Now the body of the message can be read from the stream.
                callback(null, header, stream);
                return;
              }
              // Still reading the header.
              header += str;
            }
          }
        }
        

        Unlike push, stream.unshift(chunk) will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results if readable.unshift() is called during a read (i.e. from within a _read implementation on a custom stream). Following the call to readable.unshift() with an immediate push will reset the reading state appropriately, however it is best to simply avoid calling readable.unshift() while in the process of performing a read.

        @param chunk

        Chunk of data to unshift onto the read queue. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray}, {DataView} or null. For object mode streams, chunk may be any JavaScript value.

        @param encoding

        Encoding of string chunks. Must be a valid Buffer encoding, such as 'utf8' or 'ascii'.

      • update(
        data: ArrayBufferView
        ): Buffer;

        Updates the decipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer. If data is a Buffer then inputEncoding is ignored.

        The outputEncoding specifies the output format of the deciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.

        The decipher.update() method can be called multiple times with new data until decipher.final() is called. Calling decipher.update() after decipher.final() will result in an error being thrown.

        The remaining overloads:

        update(data: string, inputEncoding: Encoding): Buffer;
        update(data: ArrayBufferView, inputEncoding: undefined, outputEncoding: Encoding): string;
        update(data: string, inputEncoding: undefined | Encoding, outputEncoding: Encoding): string;

        @param inputEncoding

        The encoding of the data string.

        @param outputEncoding

        The encoding of the return value.

      • wrap(
        stream: ReadableStream
        ): this;

        Prior to Node.js 0.10, streams did not implement the entire node:stream module API as it is currently defined. (See Compatibility for more information.)

        When using an older Node.js library that emits 'data' events and has a pause method that is advisory only, the readable.wrap() method can be used to create a Readable stream that uses the old stream as its data source.

        It will rarely be necessary to use readable.wrap() but the method has been provided as a convenience for interacting with older Node.js applications and libraries.

        import { OldReader } from './old-api-module.js';
        import { Readable } from 'node:stream';
        const oreader = new OldReader();
        const myReader = new Readable().wrap(oreader);
        
        myReader.on('readable', () => {
          myReader.read(); // etc.
        });
        
        @param stream

        An "old style" readable stream

      • write(
        chunk: any,
        callback?: (error: undefined | null | Error) => void
        ): boolean;

        The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

        The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

        While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

        Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

        If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

        function write(data, cb) {
          if (!stream.write(data)) {
            stream.once('drain', cb);
          } else {
            process.nextTick(cb);
          }
        }
        
        // Wait for cb to be called before doing any other write.
        write('hello', () => {
          console.log('Write completed, do more writes now.');
        });
        

        A Writable stream in object mode will always ignore the encoding argument.

        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        @param callback

        Callback for when this chunk of data is flushed.

        @returns

        false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.

        A second overload additionally accepts an encoding for string chunks:

        write(chunk: any, encoding: BufferEncoding, callback?: (error: undefined | null | Error) => void): boolean;

        @param encoding

        The encoding, if chunk is a string.

    • interface DecipherGCM

      Instances of the Decipher class are used to decrypt data. The class can be used in one of two ways:

      • As a stream that is both readable and writable, where plain encrypted data is written to produce unencrypted data on the readable side, or
      • Using the decipher.update() and decipher.final() methods to produce the unencrypted data.

      The createDecipheriv method is used to create Decipher instances. Decipher objects are not to be created directly using the new keyword.

      Example: Using Decipher objects as streams:

      import { Buffer } from 'node:buffer';
      const {
        scryptSync,
        createDecipheriv,
      } = await import('node:crypto');
      
      const algorithm = 'aes-192-cbc';
      const password = 'Password used to generate key';
      // Key length is dependent on the algorithm. In this case for aes192, it is
      // 24 bytes (192 bits).
      // Use the async `crypto.scrypt()` instead.
      const key = scryptSync(password, 'salt', 24);
      // The IV is usually passed along with the ciphertext.
      const iv = Buffer.alloc(16, 0); // Initialization vector.
      
      const decipher = createDecipheriv(algorithm, key, iv);
      
      let decrypted = '';
      decipher.on('readable', () => {
        let chunk;
        while (null !== (chunk = decipher.read())) {
          decrypted += chunk.toString('utf8');
        }
      });
      decipher.on('end', () => {
        console.log(decrypted);
        // Prints: some clear text data
      });
      
      // Encrypted with same algorithm, key and iv.
      const encrypted =
        'e5f79c5915c02171eec6b212d5520d44480993d7d622a7c4c2da32f6efda0ffa';
      decipher.write(encrypted, 'hex');
      decipher.end();
      

      Example: Using Decipher and piped streams:

      import {
        createReadStream,
        createWriteStream,
      } from 'node:fs';
      import { Buffer } from 'node:buffer';
      const {
        scryptSync,
        createDecipheriv,
      } = await import('node:crypto');
      
      const algorithm = 'aes-192-cbc';
      const password = 'Password used to generate key';
      // Use the async `crypto.scrypt()` instead.
      const key = scryptSync(password, 'salt', 24);
      // The IV is usually passed along with the ciphertext.
      const iv = Buffer.alloc(16, 0); // Initialization vector.
      
      const decipher = createDecipheriv(algorithm, key, iv);
      
      const input = createReadStream('test.enc');
      const output = createWriteStream('test.js');
      
      input.pipe(decipher).pipe(output);
      

      Example: Using the decipher.update() and decipher.final() methods:

      import { Buffer } from 'node:buffer';
      const {
        scryptSync,
        createDecipheriv,
      } = await import('node:crypto');
      
      const algorithm = 'aes-192-cbc';
      const password = 'Password used to generate key';
      // Use the async `crypto.scrypt()` instead.
      const key = scryptSync(password, 'salt', 24);
      // The IV is usually passed along with the ciphertext.
      const iv = Buffer.alloc(16, 0); // Initialization vector.
      
      const decipher = createDecipheriv(algorithm, key, iv);
      
      // Encrypted using same algorithm, key and iv.
      const encrypted =
        'e5f79c5915c02171eec6b212d5520d44480993d7d622a7c4c2da32f6efda0ffa';
      let decrypted = decipher.update(encrypted, 'hex', 'utf8');
      decrypted += decipher.final('utf8');
      console.log(decrypted);
      // Prints: some clear text data
      
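      The examples above decrypt aes-192-cbc data. Because this interface is DecipherGCM, a round trip in an authenticated mode such as aes-256-gcm may be more representative; the sketch below (not part of the original docs; the key, IV, and AAD values are illustrative) shows where setAAD, setAuthTag, and getAuthTag fit:

      import { Buffer } from 'node:buffer';
      const {
        randomBytes,
        createCipheriv,
        createDecipheriv,
      } = await import('node:crypto');

      const key = randomBytes(32); // aes-256-gcm uses a 256-bit key
      const iv = randomBytes(12);  // a 12-byte IV is recommended for GCM

      // Encrypt, then capture the authentication tag.
      const cipher = createCipheriv('aes-256-gcm', key, iv);
      cipher.setAAD(Buffer.from('header')); // optional additional authenticated data
      let encrypted = cipher.update('some clear text data', 'utf8', 'hex');
      encrypted += cipher.final('hex');
      const authTag = cipher.getAuthTag();

      // Decrypt. The same AAD and tag must be supplied before final().
      const decipher = createDecipheriv('aes-256-gcm', key, iv);
      decipher.setAAD(Buffer.from('header'));
      decipher.setAuthTag(authTag);
      let decrypted = decipher.update(encrypted, 'hex', 'utf8');
      decrypted += decipher.final('utf8'); // throws if the auth tag does not match
      console.log(decrypted);
      // Prints: some clear text data
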
      • allowHalfOpen: boolean

        If false then the stream will automatically end the writable side when the readable side ends. Set initially by the allowHalfOpen constructor option, which defaults to true.

        This can be changed manually to change the half-open behavior of an existing Duplex stream instance, but must be changed before the 'end' event is emitted.

      • readonly closed: boolean

        Is true after 'close' has been emitted.

      • destroyed: boolean

        Is true after readable.destroy() has been called.

      • readonly errored: null | Error

        Returns error if the stream has been destroyed with an error.

      • readable: boolean

        Is true if it is safe to call read, which means the stream has not been destroyed or emitted 'error' or 'end'.

      • readonly readableAborted: boolean

        Returns whether the stream was destroyed or errored before emitting 'end'.

      • readonly readableDidRead: boolean

        Returns whether 'data' has been emitted.

      • readonly readableEncoding: null | BufferEncoding

        Getter for the property encoding of a given Readable stream. The encoding property can be set using the setEncoding method.

      • readonly readableEnded: boolean

        Becomes true when 'end' event is emitted.

      • readonly readableFlowing: null | boolean

        This property reflects the current state of a Readable stream as described in the Three states section.

      • readonly readableHighWaterMark: number

        Returns the value of highWaterMark passed when creating this Readable.

      • readonly readableLength: number

        This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the highWaterMark.

      • readonly readableObjectMode: boolean

        Getter for the property objectMode of a given Readable stream.

      • readonly writable: boolean

        Is true if it is safe to call writable.write(), which means the stream has not been destroyed, errored, or ended.

      • readonly writableCorked: number

        Number of times writable.uncork() needs to be called in order to fully uncork the stream.

      • readonly writableEnded: boolean

        Is true after writable.end() has been called. This property does not indicate whether the data has been flushed; for that, use writable.writableFinished instead.

      • readonly writableFinished: boolean

        Is set to true immediately before the 'finish' event is emitted.

      • readonly writableHighWaterMark: number

        Return the value of highWaterMark passed when creating this Writable.

      • readonly writableLength: number

        This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the highWaterMark.

      • readonly writableNeedDrain: boolean

        Is true if the stream's buffer has been full and the stream will emit 'drain'.

      • readonly writableObjectMode: boolean

        Getter for the property objectMode of a given Writable stream.

      • callback: (error?: null | Error) => void
        ): void;
      • error: null | Error,
        callback: (error?: null | Error) => void
        ): void;
      • callback: (error?: null | Error) => void
        ): void;
      • ): void;
      • size: number
        ): void;
      • chunk: any,
        encoding: BufferEncoding,
        ): void;
      • chunk: any,
        encoding: BufferEncoding,
        callback: (error?: null | Error) => void
        ): void;
      • chunks: { chunk: any; encoding: BufferEncoding }[],
        callback: (error?: null | Error) => void
        ): void;
      • [Symbol.asyncDispose](): Promise<void>;

        Calls readable.destroy() with an AbortError and returns a promise that fulfills when the stream is finished.

      • [Symbol.asyncIterator](): AsyncIterator<any>;
      • error: Error,
        event: string | symbol,
        ...args: AnyRest
        ): void;
      • event: 'close',
        listener: () => void
        ): this;

        Event emitter. The defined events on documents include:

        1. close
        2. data
        3. drain
        4. end
        5. error
        6. finish
        7. pause
        8. pipe
        9. readable
        10. resume
        11. unpipe
        event: 'data',
        listener: (chunk: any) => void
        ): this;
        event: 'drain',
        listener: () => void
        ): this;
        event: 'end',
        listener: () => void
        ): this;
        event: 'error',
        listener: (err: Error) => void
        ): this;
        event: 'finish',
        listener: () => void
        ): this;
        event: 'pause',
        listener: () => void
        ): this;
        event: 'pipe',
        listener: (src: Readable) => void
        ): this;
        event: 'readable',
        listener: () => void
        ): this;
        event: 'resume',
        listener: () => void
        ): this;
        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;
        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • options?: Pick<ArrayOptions, 'signal'>

        This method returns a new stream with chunks of the underlying stream paired with a counter in the form [index, chunk]. The first index value is 0 and it increases by 1 for each chunk produced.

        @returns

        a stream of indexed pairs.
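
        For illustration, a small sketch (not part of the original docs), run from an ES module where top-level await is available:

        import { Readable } from 'node:stream';

        const pairs = await Readable.from(['a', 'b', 'c'])
          .asIndexedPairs()
          .toArray();
        console.log(pairs); // Prints: [[0, 'a'], [1, 'b'], [2, 'c']]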

      • compose<T extends ReadableStream>(
        stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,
        options?: { signal: AbortSignal }
        ): T;
      • cork(): void;

        The writable.cork() method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.

        The primary intent of writable.cork() is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them all to writable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork() without implementing writable._writev() may have an adverse effect on throughput.

        See also: writable.uncork(), writable._writev().

      • error?: Error
        ): this;

        Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the readable stream will release any internal resources and subsequent calls to push() will be ignored.

        Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.

        Implementors should not override this method, but instead implement readable._destroy().

        @param error

        Error which will be passed as payload in 'error' event

      • limit: number,
        options?: Pick<ArrayOptions, 'signal'>

        This method returns a new stream with the first limit chunks dropped from the start.

        @param limit

        the number of chunks to drop from the readable.

        @returns

        a stream with limit chunks dropped from the start.
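
        For example (a sketch, not from the original docs):

        import { Readable } from 'node:stream';

        console.log(await Readable.from([1, 2, 3, 4]).drop(2).toArray()); // Prints: [3, 4]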

      • event: 'close'
        ): boolean;

        Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

        Returns true if the event had listeners, false otherwise.

        import { EventEmitter } from 'node:events';
        const myEmitter = new EventEmitter();
        
        // First listener
        myEmitter.on('event', function firstListener() {
          console.log('Helloooo! first listener');
        });
        // Second listener
        myEmitter.on('event', function secondListener(arg1, arg2) {
          console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
        });
        // Third listener
        myEmitter.on('event', function thirdListener(...args) {
          const parameters = args.join(', ');
          console.log(`event with parameters ${parameters} in third listener`);
        });
        
        console.log(myEmitter.listeners('event'));
        
        myEmitter.emit('event', 1, 2, 3, 4, 5);
        
        // Prints:
        // [
        //   [Function: firstListener],
        //   [Function: secondListener],
        //   [Function: thirdListener]
        // ]
        // Helloooo! first listener
        // event with parameters 1, 2 in second listener
        // event with parameters 1, 2, 3, 4, 5 in third listener
        
        event: 'data',
        chunk: any
        ): boolean;
        event: 'drain'
        ): boolean;
        event: 'end'
        ): boolean;
        event: 'error',
        err: Error
        ): boolean;
        event: 'finish'
        ): boolean;
        event: 'pause'
        ): boolean;
        event: 'pipe',
        ): boolean;
        event: 'readable'
        ): boolean;
        event: 'resume'
        ): boolean;
        event: 'unpipe',
        ): boolean;
        event: string | symbol,
        ...args: any[]
        ): boolean;
      • cb?: () => void
        ): this;

        Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

        Calling the write method after calling end will raise an error.

        // Write 'hello, ' and then end with 'world!'.
        import fs from 'node:fs';
        const file = fs.createWriteStream('example.txt');
        file.write('hello, ');
        file.end('world!');
        // Writing more now is not allowed!
        
        chunk: any,
        cb?: () => void
        ): this;

        Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

        Calling the write method after calling end will raise an error.

        // Write 'hello, ' and then end with 'world!'.
        import fs from 'node:fs';
        const file = fs.createWriteStream('example.txt');
        file.write('hello, ');
        file.end('world!');
        // Writing more now is not allowed!
        
        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a string, Buffer, TypedArray or DataView. For object mode streams, chunk may be any JavaScript value other than null.

        chunk: any,
        encoding: BufferEncoding,
        cb?: () => void
        ): this;

        Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

        Calling the write method after calling end will raise an error.

        // Write 'hello, ' and then end with 'world!'.
        import fs from 'node:fs';
        const file = fs.createWriteStream('example.txt');
        file.write('hello, ');
        file.end('world!');
        // Writing more now is not allowed!
        
        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a string, Buffer, TypedArray or DataView. For object mode streams, chunk may be any JavaScript value other than null.

        @param encoding

        The encoding, if chunk is a string.

      • eventNames(): (string | symbol)[];

        Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or Symbols.

        import { EventEmitter } from 'node:events';
        
        const myEE = new EventEmitter();
        myEE.on('foo', () => {});
        myEE.on('bar', () => {});
        
        const sym = Symbol('symbol');
        myEE.on(sym, () => {});
        
        console.log(myEE.eventNames());
        // Prints: [ 'foo', 'bar', Symbol(symbol) ]
        
      • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
        options?: ArrayOptions
        ): Promise<boolean>;

        This method is similar to Array.prototype.every and calls fn on each chunk in the stream to check whether all awaited return values are truthy for fn. Once an fn call's awaited return value is falsy, the stream is destroyed and the promise is fulfilled with false. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to true if fn returned a truthy value for every one of the chunks.
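
        A sketch (not from the original docs) with a synchronous predicate:

        import { Readable } from 'node:stream';

        const allPositive = await Readable.from([1, 2, 3]).every((n) => n > 0);
        console.log(allPositive); // Prints: true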

      • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
        options?: ArrayOptions

        This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be awaited.

        @param fn

        a function to filter chunks from the stream. Async or not.

        @returns

        a stream filtered with the predicate fn.
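
        A short sketch (not from the original docs):

        import { Readable } from 'node:stream';

        const evens = await Readable.from([1, 2, 3, 4])
          .filter((n) => n % 2 === 0)
          .toArray();
        console.log(evens); // Prints: [2, 4]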

      • ): Buffer;

        Once the decipher.final() method has been called, the Decipher object can no longer be used to decrypt data. Attempts to call decipher.final() more than once will result in an error being thrown.

        @returns

        Any remaining deciphered contents. If outputEncoding is specified, a string is returned. If an outputEncoding is not provided, a Buffer is returned.

        outputEncoding: BufferEncoding
        ): string;

        Once the decipher.final() method has been called, the Decipher object can no longer be used to decrypt data. Attempts to call decipher.final() more than once will result in an error being thrown.

        @param outputEncoding

        The encoding of the return value.

        @returns

        Any remaining deciphered contents. If outputEncoding is specified, a string is returned. If an outputEncoding is not provided, a Buffer is returned.

      • find<T>(
        fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => data is T,
        options?: ArrayOptions
        ): Promise<undefined | T>;

        This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.

        fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
        options?: ArrayOptions
        ): Promise<any>;

        This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.
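
        For example (a sketch, not from the original docs):

        import { Readable } from 'node:stream';

        const firstBig = await Readable.from([1, 5, 10]).find((n) => n > 4);
        console.log(firstBig); // Prints: 5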

      • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => any,
        options?: ArrayOptions

        This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.

        It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.

        @param fn

        a function to map over every chunk in the stream. May be async. May be a stream or generator.

        @returns

        a stream flat-mapped with the function fn.
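
        A sketch (not from the original docs); returning an array from fn works because arrays are iterable:

        import { Readable } from 'node:stream';

        const flat = await Readable.from([1, 2])
          .flatMap((n) => [n, n * 10])
          .toArray();
        console.log(flat); // Prints: [1, 10, 2, 20]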

      • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => void | Promise<void>,
        options?: ArrayOptions
        ): Promise<void>;

        This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be awaited.

        This method is different from for await...of loops in that it can optionally process chunks concurrently. In addition, a forEach iteration can only be stopped by having passed a signal option and aborting the related AbortController while for await...of can be stopped with break or return. In either case the stream will be destroyed.

        This method is different from listening to the 'data' event in that it uses the 'readable' event in the underlying machinery and can limit the number of concurrent fn calls.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise for when the stream has finished.
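
        A minimal sketch (not from the original docs):

        import { Readable } from 'node:stream';

        await Readable.from([1, 2, 3]).forEach((n) => {
          console.log(n); // Prints: 1, 2, 3
        });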

      • getMaxListeners(): number;

        Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to EventEmitter.defaultMaxListeners.

      • isPaused(): boolean;

        The readable.isPaused() method returns the current operating state of the Readable. This is used primarily by the mechanism that underlies the readable.pipe() method. In most typical cases, there will be no reason to use this method directly.

        const readable = new stream.Readable();
        
        readable.isPaused(); // === false
        readable.pause();
        readable.isPaused(); // === true
        readable.resume();
        readable.isPaused(); // === false
        
      • options?: { destroyOnReturn: boolean }
        ): AsyncIterator<any>;

        The iterator created by this method gives users the option to cancel the destruction of the stream if the for await...of loop is exited by return, break, or throw (via the destroyOnReturn option); the stream is still destroyed if it emitted an error during iteration.

      • eventName: string | symbol,
        listener?: Function
        ): number;

        Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.

        @param eventName

        The name of the event being listened for

        @param listener

        The event handler function
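
        A sketch (not from the original docs) showing both forms:

        import { EventEmitter } from 'node:events';

        const ee = new EventEmitter();
        const handler = () => {};
        ee.on('ping', handler);
        ee.on('ping', () => {});
        console.log(ee.listenerCount('ping'));          // Prints: 2
        console.log(ee.listenerCount('ping', handler)); // Prints: 1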

      • eventName: string | symbol
        ): Function[];

        Returns a copy of the array of listeners for the event named eventName.

        server.on('connection', (stream) => {
          console.log('someone connected!');
        });
        console.log(util.inspect(server.listeners('connection')));
        // Prints: [ [Function] ]
        
      • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => any,
        options?: ArrayOptions

        This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be awaited before being passed to the result stream.

        @param fn

        a function to map over every chunk in the stream. Async or not.

        @returns

        a stream mapped with the function fn.
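
        For example (a sketch, not from the original docs):

        import { Readable } from 'node:stream';

        const doubled = await Readable.from([1, 2, 3])
          .map((n) => n * 2)
          .toArray();
        console.log(doubled); // Prints: [2, 4, 6]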

      • off<K>(
        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;

        Alias for emitter.removeListener().

      • event: 'close',
        listener: () => void
        ): this;

        Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

        server.on('connection', (stream) => {
          console.log('someone connected!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

        import { EventEmitter } from 'node:events';
        const myEE = new EventEmitter();
        myEE.on('foo', () => console.log('a'));
        myEE.prependListener('foo', () => console.log('b'));
        myEE.emit('foo');
        // Prints:
        //   b
        //   a
        
        @param listener

        The callback function

        event: 'data',
        listener: (chunk: any) => void
        ): this;
        event: 'drain',
        listener: () => void
        ): this;
        event: 'end',
        listener: () => void
        ): this;
        event: 'error',
        listener: (err: Error) => void
        ): this;
        event: 'finish',
        listener: () => void
        ): this;
        event: 'pause',
        listener: () => void
        ): this;
        event: 'pipe',
        listener: (src: Readable) => void
        ): this;
        event: 'readable',
        listener: () => void
        ): this;
        event: 'resume',
        listener: () => void
        ): this;
        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;
        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • event: 'close',
        listener: () => void
        ): this;

        Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

        server.once('connection', (stream) => {
          console.log('Ah, we have our first user!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

        import { EventEmitter } from 'node:events';
        const myEE = new EventEmitter();
        myEE.once('foo', () => console.log('a'));
        myEE.prependOnceListener('foo', () => console.log('b'));
        myEE.emit('foo');
        // Prints:
        //   b
        //   a
        
        @param listener

        The callback function

        event: 'data',
        listener: (chunk: any) => void
        ): this;
        event: 'drain',
        listener: () => void
        ): this;
        event: 'end',
        listener: () => void
        ): this;
        event: 'error',
        listener: (err: Error) => void
        ): this;
        event: 'finish',
        listener: () => void
        ): this;
        event: 'pause',
        listener: () => void
        ): this;
        event: 'pipe',
        listener: (src: Readable) => void
        ): this;
        event: 'readable',
        listener: () => void
        ): this;
        event: 'resume',
        listener: () => void
        ): this;
        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;
        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • pause(): this;

        The readable.pause() method will cause a stream in flowing mode to stop emitting 'data' events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.

        const readable = getReadableStreamSomehow();
        readable.on('data', (chunk) => {
          console.log(`Received ${chunk.length} bytes of data.`);
          readable.pause();
          console.log('There will be no additional data for 1 second.');
          setTimeout(() => {
            console.log('Now data will start flowing again.');
            readable.resume();
          }, 1000);
        });
        

        The readable.pause() method has no effect if there is a 'readable' event listener.

      • pipe<T extends WritableStream>(
        destination: T,
        options?: { end: boolean }
        ): T;
      • event: 'close',
        listener: () => void
        ): this;

        Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

        server.prependListener('connection', (stream) => {
          console.log('someone connected!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        @param listener

        The callback function

        event: 'data',
        listener: (chunk: any) => void
        ): this;
        event: 'drain',
        listener: () => void
        ): this;
        event: 'end',
        listener: () => void
        ): this;
        event: 'error',
        listener: (err: Error) => void
        ): this;
        event: 'finish',
        listener: () => void
        ): this;
        event: 'pause',
        listener: () => void
        ): this;
        event: 'pipe',
        listener: (src: Readable) => void
        ): this;
        event: 'readable',
        listener: () => void
        ): this;
        event: 'resume',
        listener: () => void
        ): this;
        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;
        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • event: 'close',
        listener: () => void
        ): this;

        Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

        server.prependOnceListener('connection', (stream) => {
          console.log('Ah, we have our first user!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        @param listener

        The callback function

        event: 'data',
        listener: (chunk: any) => void
        ): this;
        event: 'drain',
        listener: () => void
        ): this;
        event: 'end',
        listener: () => void
        ): this;
        event: 'error',
        listener: (err: Error) => void
        ): this;
        event: 'finish',
        listener: () => void
        ): this;
        event: 'pause',
        listener: () => void
        ): this;
        event: 'pipe',
        listener: (src: Readable) => void
        ): this;
        event: 'readable',
        listener: () => void
        ): this;
        event: 'resume',
        listener: () => void
        ): this;
        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;
        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • chunk: any,
        encoding?: BufferEncoding
        ): boolean;
      • eventName: string | symbol
        ): Function[];

        Returns a copy of the array of listeners for the event named eventName, including any wrappers (such as those created by .once()).

        import { EventEmitter } from 'node:events';
        const emitter = new EventEmitter();
        emitter.once('log', () => console.log('log once'));
        
        // Returns a new Array with a function `onceWrapper` which has a property
        // `listener` which contains the original listener bound above
        const listeners = emitter.rawListeners('log');
        const logFnWrapper = listeners[0];
        
        // Logs "log once" to the console and does not unbind the `once` event
        logFnWrapper.listener();
        
        // Logs "log once" to the console and removes the listener
        logFnWrapper();
        
        emitter.on('log', () => console.log('log persistently'));
        // Will return a new Array with a single function bound by `.on()` above
        const newListeners = emitter.rawListeners('log');
        
        // Logs "log persistently" twice
        newListeners[0]();
        emitter.emit('log');
        
      • size?: number
        ): any;

        The readable.read() method reads data out of the internal buffer and returns it. If no data is available to be read, null is returned. By default, the data is returned as a Buffer object unless an encoding has been specified using the readable.setEncoding() method or the stream is operating in object mode.

        The optional size argument specifies a specific number of bytes to read. If size bytes are not available to be read, null will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.

        If the size argument is not specified, all of the data contained in the internal buffer will be returned.

        The size argument must be less than or equal to 1 GiB.

        The readable.read() method should only be called on Readable streams operating in paused mode. In flowing mode, readable.read() is called automatically until the internal buffer is fully drained.

        const readable = getReadableStreamSomehow();
        
        // 'readable' may be triggered multiple times as data is buffered in
        readable.on('readable', () => {
          let chunk;
          console.log('Stream is readable (new data received in buffer)');
          // Use a loop to make sure we read all currently available data
          while (null !== (chunk = readable.read())) {
            console.log(`Read ${chunk.length} bytes of data...`);
          }
        });
        
        // 'end' will be triggered once when there is no more data available
        readable.on('end', () => {
          console.log('Reached end of stream.');
        });
        

        Each call to readable.read() returns a chunk of data, or null. The chunks are not concatenated. A while loop is necessary to consume all data currently in the buffer. When reading a large file, .read() may return null, having consumed all buffered content so far, but there is still more data to come that has not yet been buffered. In this case a new 'readable' event will be emitted when there is more data in the buffer. Finally, the 'end' event will be emitted when there is no more data to come.

        Therefore to read a file's whole contents from a readable, it is necessary to collect chunks across multiple 'readable' events:

        const chunks = [];
        
        readable.on('readable', () => {
          let chunk;
          while (null !== (chunk = readable.read())) {
            chunks.push(chunk);
          }
        });
        
        readable.on('end', () => {
          const content = chunks.join('');
        });
        

        A Readable stream in object mode will always return a single item from a call to readable.read(size), regardless of the value of the size argument.

        If the readable.read() method returns a chunk of data, a 'data' event will also be emitted.

        Calling read after the 'end' event has been emitted will return null. No runtime error will be raised.

        @param size

        Optional argument to specify how much data to read.

      • reduce<T = any>(
        fn: (previous: any, data: any, options?: Pick<ArrayOptions, 'signal'>) => T,
        initial?: undefined,
        options?: Pick<ArrayOptions, 'signal'>
        ): Promise<T>;

        This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.

        If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.

        The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function into the readable.map method.

        @param fn

        a reducer function to call over every chunk in the stream. Async or not.

        @param initial

        the initial value to use in the reduction.

        @returns

        a promise for the final value of the reduction.

        reduce<T = any>(
        fn: (previous: T, data: any, options?: Pick<ArrayOptions, 'signal'>) => T,
        initial: T,
        options?: Pick<ArrayOptions, 'signal'>
        ): Promise<T>;

        This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.

        If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.

        The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function into the readable.map method.

        @param fn

        a reducer function to call over every chunk in the stream. Async or not.

        @param initial

        the initial value to use in the reduction.

        @returns

        a promise for the final value of the reduction.
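
        A sketch (not from the original docs) summing the chunks:

        import { Readable } from 'node:stream';

        const sum = await Readable.from([1, 2, 3, 4])
          .reduce((acc, n) => acc + n, 0);
        console.log(sum); // Prints: 10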

      • eventName?: string | symbol
        ): this;

        Removes all listeners, or those of the specified eventName.

        It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).

        Returns a reference to the EventEmitter, so that calls can be chained.

      • event: 'close',
        listener: () => void
        ): this;

        Removes the specified listener from the listener array for the event named eventName.

        const callback = (stream) => {
          console.log('someone connected!');
        };
        server.on('connection', callback);
        // ...
        server.removeListener('connection', callback);
        

        removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance.

        Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from an emit() in progress. Subsequent events behave as expected.

        import { EventEmitter } from 'node:events';
        class MyEmitter extends EventEmitter {}
        const myEmitter = new MyEmitter();
        
        const callbackA = () => {
          console.log('A');
          myEmitter.removeListener('event', callbackB);
        };
        
        const callbackB = () => {
          console.log('B');
        };
        
        myEmitter.on('event', callbackA);
        
        myEmitter.on('event', callbackB);
        
        // callbackA removes listener callbackB but it will still be called.
        // Internal listener array at time of emit [callbackA, callbackB]
        myEmitter.emit('event');
        // Prints:
        //   A
        //   B
        
        // callbackB is now removed.
        // Internal listener array [callbackA]
        myEmitter.emit('event');
        // Prints:
        //   A
        

        Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.

        When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed:

        import { EventEmitter } from 'node:events';
        const ee = new EventEmitter();
        
        function pong() {
          console.log('pong');
        }
        
        ee.on('ping', pong);
        ee.once('ping', pong);
        ee.removeListener('ping', pong);
        
        ee.emit('ping');
        ee.emit('ping');
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        event: 'data',
        listener: (chunk: any) => void
        ): this;
        event: 'drain',
        listener: () => void
        ): this;
        event: 'end',
        listener: () => void
        ): this;
        event: 'error',
        listener: (err: Error) => void
        ): this;
        event: 'finish',
        listener: () => void
        ): this;
        event: 'pause',
        listener: () => void
        ): this;
        event: 'pipe',
        listener: (src: Readable) => void
        ): this;
        event: 'readable',
        listener: () => void
        ): this;
        event: 'resume',
        listener: () => void
        ): this;
        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;
        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • resume(): this;

        The readable.resume() method causes an explicitly paused Readable stream to resume emitting 'data' events, switching the stream into flowing mode.

        The readable.resume() method can be used to fully consume the data from a stream without actually processing any of that data:

        getReadableStreamSomehow()
          .resume()
          .on('end', () => {
            console.log('Reached the end, but did not read anything.');
          });
        

        The readable.resume() method has no effect if there is a 'readable' event listener.

      • buffer: ArrayBufferView,
        options?: { plaintextLength: number }
        ): this;
      • buffer: ArrayBufferView
        ): this;
      • auto_padding?: boolean
        ): this;

        When data has been encrypted without standard block padding, calling decipher.setAutoPadding(false) will disable automatic padding to prevent decipher.final() from checking for and removing padding.

        Turning auto padding off will only work if the input data's length is a multiple of the cipher's block size.

        The decipher.setAutoPadding() method must be called before decipher.final().

        @returns

        for method chaining.
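
        A round-trip sketch (not from the original docs) with padding disabled on both sides; the plaintext is exactly one 16-byte AES block, and the key derivation mirrors the earlier examples:

        import { Buffer } from 'node:buffer';
        const {
          scryptSync,
          createCipheriv,
          createDecipheriv,
        } = await import('node:crypto');

        const key = scryptSync('Password used to generate key', 'salt', 24);
        const iv = Buffer.alloc(16, 0);

        // Encrypt a block-aligned input (16 bytes) without standard padding.
        const cipher = createCipheriv('aes-192-cbc', key, iv);
        cipher.setAutoPadding(false);
        let encrypted = cipher.update('16 byte payload!', 'utf8', 'hex');
        encrypted += cipher.final('hex');

        // Decryption must also disable auto padding, before final().
        const decipher = createDecipheriv('aes-192-cbc', key, iv);
        decipher.setAutoPadding(false);
        let decrypted = decipher.update(encrypted, 'hex', 'utf8');
        decrypted += decipher.final('utf8');
        console.log(decrypted);
        // Prints: 16 byte payload!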

      • encoding: BufferEncoding
        ): this;

        The writable.setDefaultEncoding() method sets the default encoding for a Writable stream.

        @param encoding

        The new default encoding

      • encoding: BufferEncoding
        ): this;

        The readable.setEncoding() method sets the character encoding for data read from the Readable stream.

        By default, no encoding is assigned and stream data will be returned as Buffer objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than as Buffer objects. For instance, calling readable.setEncoding('utf8') will cause the output data to be interpreted as UTF-8 data, and passed as strings. Calling readable.setEncoding('hex') will cause the data to be encoded in hexadecimal string format.

        The Readable stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream as Buffer objects.

        const readable = getReadableStreamSomehow();
        readable.setEncoding('utf8');
        readable.on('data', (chunk) => {
          assert.equal(typeof chunk, 'string');
          console.log('Got %d characters of string data:', chunk.length);
        });
        
        @param encoding

        The encoding to use.

      • n: number
        ): this;

        By default EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps find memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.

        Returns a reference to the EventEmitter, so that calls can be chained.
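
        A sketch (not from the original docs):

        import { EventEmitter } from 'node:events';

        const ee = new EventEmitter();
        ee.setMaxListeners(20); // raise the warning threshold for this emitter only
        console.log(ee.getMaxListeners()); // Prints: 20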

      • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
        options?: ArrayOptions
        ): Promise<boolean>;

        This method is similar to Array.prototype.some and calls fn on each chunk in the stream until the awaited return value is true (or any truthy value). Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to true if fn returned a truthy value for at least one of the chunks.
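
        For example (a sketch, not from the original docs):

        import { Readable } from 'node:stream';

        const hasEven = await Readable.from([1, 3, 4]).some((n) => n % 2 === 0);
        console.log(hasEven); // Prints: true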

      • limit: number,
        options?: Pick<ArrayOptions, 'signal'>

        This method returns a new stream with the first limit chunks.

        @param limit

        the number of chunks to take from the readable.

        @returns

        a stream with limit chunks taken.
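
        For example (a sketch, not from the original docs):

        import { Readable } from 'node:stream';

        console.log(await Readable.from([1, 2, 3, 4]).take(2).toArray()); // Prints: [1, 2]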

      • options?: Pick<ArrayOptions, 'signal'>
        ): Promise<any[]>;

        This method allows easily obtaining the contents of a stream.

        As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.

        @returns

        a promise containing an array with the contents of the stream.
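
        For example (a sketch, not from the original docs):

        import { Readable } from 'node:stream';

        const contents = await Readable.from([1, 2, 3]).toArray();
        console.log(contents); // Prints: [1, 2, 3]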

      • uncork(): void;

        The writable.uncork() method flushes all data buffered since cork was called.

        When using writable.cork() and writable.uncork() to manage the buffering of writes to a stream, defer calls to writable.uncork() using process.nextTick(). Doing so allows batching of all writable.write() calls that occur within a given Node.js event loop phase.

        stream.cork();
        stream.write('some ');
        stream.write('data ');
        process.nextTick(() => stream.uncork());
        

        If the writable.cork() method is called multiple times on a stream, the same number of calls to writable.uncork() must be called to flush the buffered data.

        stream.cork();
        stream.write('some ');
        stream.cork();
        stream.write('data ');
        process.nextTick(() => {
          stream.uncork();
          // The data will not be flushed until uncork() is called a second time.
          stream.uncork();
        });
        

        See also: writable.cork().

      • destination?: WritableStream
        ): this;

        The readable.unpipe() method detaches a Writable stream previously attached using the pipe method.

        If the destination is not specified, then all pipes are detached.

        If the destination is specified, but no pipe is set up for it, then the method does nothing.

        import fs from 'node:fs';
        const readable = getReadableStreamSomehow();
        const writable = fs.createWriteStream('file.txt');
        // All the data from readable goes into 'file.txt',
        // but only for the first second.
        readable.pipe(writable);
        setTimeout(() => {
          console.log('Stop writing to file.txt.');
          readable.unpipe(writable);
          console.log('Manually close the file stream.');
          writable.end();
        }, 1000);
        
        @param destination

        Optional specific stream to unpipe

      • chunk: any,
        encoding?: BufferEncoding
        ): void;

        Passing chunk as null signals the end of the stream (EOF) and behaves the same as readable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.

        The readable.unshift() method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.

        The stream.unshift(chunk) method cannot be called after the 'end' event has been emitted or a runtime error will be thrown.

        Developers using stream.unshift() often should consider switching to use of a Transform stream instead. See the API for stream implementers section for more information.

        // Pull off a header delimited by \n\n.
        // Use unshift() if we get too much.
        // Call the callback with (error, header, stream).
        import { StringDecoder } from 'node:string_decoder';
        function parseHeader(stream, callback) {
          stream.on('error', callback);
          stream.on('readable', onReadable);
          const decoder = new StringDecoder('utf8');
          let header = '';
          function onReadable() {
            let chunk;
            while (null !== (chunk = stream.read())) {
              const str = decoder.write(chunk);
              if (str.includes('\n\n')) {
                // Found the header boundary.
                const split = str.split(/\n\n/);
                header += split.shift();
                const remaining = split.join('\n\n');
                const buf = Buffer.from(remaining, 'utf8');
                stream.removeListener('error', callback);
                // Remove the 'readable' listener before unshifting.
                stream.removeListener('readable', onReadable);
                if (buf.length)
                  stream.unshift(buf);
                // Now the body of the message can be read from the stream.
                callback(null, header, stream);
                return;
              }
              // Still reading the header.
              header += str;
            }
          }
        }
        

        Unlike push, stream.unshift(chunk) will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results if readable.unshift() is called during a read (i.e. from within a _read implementation on a custom stream). Following the call to readable.unshift() with an immediate push will reset the reading state appropriately, however it is best to simply avoid calling readable.unshift() while in the process of performing a read.

        @param chunk

        Chunk of data to unshift onto the read queue. For streams not operating in object mode, chunk must be a string, Buffer, TypedArray, DataView or null. For object mode streams, chunk may be any JavaScript value.

        @param encoding

        Encoding of string chunks. Must be a valid Buffer encoding, such as 'utf8' or 'ascii'.

      • data: ArrayBufferView
        ): Buffer;

        Updates the decipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer. If data is a Buffer then inputEncoding is ignored.

        The outputEncoding specifies the output format of the deciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.

        The decipher.update() method can be called multiple times with new data until decipher.final() is called. Calling decipher.update() after decipher.final() will result in an error being thrown.

        data: string,
        inputEncoding: Encoding
        ): Buffer;

        Updates the decipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer. If data is a Buffer then inputEncoding is ignored.

        The outputEncoding specifies the output format of the deciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.

        The decipher.update() method can be called multiple times with new data until decipher.final() is called. Calling decipher.update() after decipher.final() will result in an error being thrown.

        @param inputEncoding

        The encoding of the data string.

        data: ArrayBufferView,
        inputEncoding: undefined,
        outputEncoding: Encoding
        ): string;

        Updates the decipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer. If data is a Buffer then inputEncoding is ignored.

        The outputEncoding specifies the output format of the deciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.

        The decipher.update() method can be called multiple times with new data until decipher.final() is called. Calling decipher.update() after decipher.final() will result in an error being thrown.

        @param inputEncoding

        The encoding of the data string.

        @param outputEncoding

        The encoding of the return value.

        data: string,
        inputEncoding: undefined | Encoding,
        outputEncoding: Encoding
        ): string;

        Updates the decipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer. If data is a Buffer then inputEncoding is ignored.

        The outputEncoding specifies the output format of the deciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.

        The decipher.update() method can be called multiple times with new data until decipher.final() is called. Calling decipher.update() after decipher.final() will result in an error being thrown.

        @param inputEncoding

        The encoding of the data string.

        @param outputEncoding

        The encoding of the return value.
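
        For illustration, a minimal sketch of the string-in/string-out form; decipher and the hex-encoded encryptedHex are hypothetical here and would come from code like the full Decipher examples below:

        // `encryptedHex` is a hypothetical hex string produced by a matching Cipher.
        let plaintext = decipher.update(encryptedHex, 'hex', 'utf8');
        plaintext += decipher.final('utf8');
        console.log(plaintext);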

      • stream: ReadableStream
        ): this;

        Prior to Node.js 0.10, streams did not implement the entire node:stream module API as it is currently defined. (See Compatibility for more information.)

        When using an older Node.js library that emits 'data' events and has a pause method that is advisory only, the readable.wrap() method can be used to create a Readable stream that uses the old stream as its data source.

        It will rarely be necessary to use readable.wrap() but the method has been provided as a convenience for interacting with older Node.js applications and libraries.

        import { OldReader } from './old-api-module.js';
        import { Readable } from 'node:stream';
        const oreader = new OldReader();
        const myReader = new Readable().wrap(oreader);
        
        myReader.on('readable', () => {
          myReader.read(); // etc.
        });
        
        @param stream

        An "old style" readable stream

      • chunk: any,
        callback?: (error: undefined | null | Error) => void
        ): boolean;

        The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

        The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

        While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

        Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

        If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

        function write(data, cb) {
          if (!stream.write(data)) {
            stream.once('drain', cb);
          } else {
            process.nextTick(cb);
          }
        }
        
        // Wait for cb to be called before doing any other write.
        write('hello', () => {
          console.log('Write completed, do more writes now.');
        });
        

        A Writable stream in object mode will always ignore the encoding argument.

        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        @param callback

        Callback for when this chunk of data is flushed.

        @returns

        false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.

        chunk: any,
        encoding: BufferEncoding,
        callback?: (error: undefined | null | Error) => void
        ): boolean;

        The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

        The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

        While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

        Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

        If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

        function write(data, cb) {
          if (!stream.write(data)) {
            stream.once('drain', cb);
          } else {
            process.nextTick(cb);
          }
        }
        
        // Wait for cb to be called before doing any other write.
        write('hello', () => {
          console.log('Write completed, do more writes now.');
        });
        

        A Writable stream in object mode will always ignore the encoding argument.

        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        @param encoding

        The encoding, if chunk is a string.

        @param callback

        Callback for when this chunk of data is flushed.

        @returns

        false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.

    • interface DecipherOCB

      Instances of the Decipher class are used to decrypt data. The class can be used in one of two ways:

      • As a stream that is both readable and writable, where plain encrypted data is written to produce unencrypted data on the readable side, or
      • Using the decipher.update() and decipher.final() methods to produce the unencrypted data.

      The createDecipheriv method is used to create Decipher instances. Decipher objects are not to be created directly using the new keyword.

      Example: Using Decipher objects as streams:

      import { Buffer } from 'node:buffer';
      const {
        scryptSync,
        createDecipheriv,
      } = await import('node:crypto');
      
      const algorithm = 'aes-192-cbc';
      const password = 'Password used to generate key';
      // Key length is dependent on the algorithm. In this case for aes192, it is
      // 24 bytes (192 bits).
      // Use the async `crypto.scrypt()` instead.
      const key = scryptSync(password, 'salt', 24);
      // The IV is usually passed along with the ciphertext.
      const iv = Buffer.alloc(16, 0); // Initialization vector.
      
      const decipher = createDecipheriv(algorithm, key, iv);
      
      let decrypted = '';
      decipher.on('readable', () => {
        let chunk;
        while (null !== (chunk = decipher.read())) {
          decrypted += chunk.toString('utf8');
        }
      });
      decipher.on('end', () => {
        console.log(decrypted);
        // Prints: some clear text data
      });
      
      // Encrypted with same algorithm, key and iv.
      const encrypted =
        'e5f79c5915c02171eec6b212d5520d44480993d7d622a7c4c2da32f6efda0ffa';
      decipher.write(encrypted, 'hex');
      decipher.end();
      

      Example: Using Decipher and piped streams:

      import {
        createReadStream,
        createWriteStream,
      } from 'node:fs';
      import { Buffer } from 'node:buffer';
      const {
        scryptSync,
        createDecipheriv,
      } = await import('node:crypto');
      
      const algorithm = 'aes-192-cbc';
      const password = 'Password used to generate key';
      // Use the async `crypto.scrypt()` instead.
      const key = scryptSync(password, 'salt', 24);
      // The IV is usually passed along with the ciphertext.
      const iv = Buffer.alloc(16, 0); // Initialization vector.
      
      const decipher = createDecipheriv(algorithm, key, iv);
      
      const input = createReadStream('test.enc');
      const output = createWriteStream('test.js');
      
      input.pipe(decipher).pipe(output);
      

      Example: Using the decipher.update() and decipher.final() methods:

      import { Buffer } from 'node:buffer';
      const {
        scryptSync,
        createDecipheriv,
      } = await import('node:crypto');
      
      const algorithm = 'aes-192-cbc';
      const password = 'Password used to generate key';
      // Use the async `crypto.scrypt()` instead.
      const key = scryptSync(password, 'salt', 24);
      // The IV is usually passed along with the ciphertext.
      const iv = Buffer.alloc(16, 0); // Initialization vector.
      
      const decipher = createDecipheriv(algorithm, key, iv);
      
      // Encrypted using same algorithm, key and iv.
      const encrypted =
        'e5f79c5915c02171eec6b212d5520d44480993d7d622a7c4c2da32f6efda0ffa';
      let decrypted = decipher.update(encrypted, 'hex', 'utf8');
      decrypted += decipher.final('utf8');
      console.log(decrypted);
      // Prints: some clear text data
      
      • allowHalfOpen: boolean

        If false then the stream will automatically end the writable side when the readable side ends. Set initially by the allowHalfOpen constructor option, which defaults to true.

        This can be changed manually to change the half-open behavior of an existing Duplex stream instance, but must be changed before the 'end' event is emitted.

      • readonly closed: boolean

        Is true after 'close' has been emitted.

      • destroyed: boolean

        Is true after readable.destroy() has been called.

      • readonly errored: null | Error

        Returns error if the stream has been destroyed with an error.

      • readable: boolean

        Is true if it is safe to call read, which means the stream has not been destroyed or emitted 'error' or 'end'.

      • readonly readableAborted: boolean

        Returns whether the stream was destroyed or errored before emitting 'end'.

      • readonly readableDidRead: boolean

        Returns whether 'data' has been emitted.

      • readonly readableEncoding: null | BufferEncoding

        Getter for the property encoding of a given Readable stream. The encoding property can be set using the setEncoding method.

      • readonly readableEnded: boolean

        Becomes true when 'end' event is emitted.

      • readonly readableFlowing: null | boolean

        This property reflects the current state of a Readable stream as described in the Three states section.

      • readonly readableHighWaterMark: number

        Returns the value of highWaterMark passed when creating this Readable.

      • readonly readableLength: number

        This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the highWaterMark.

      • readonly readableObjectMode: boolean

        Getter for the property objectMode of a given Readable stream.

      • readonly writable: boolean

        Is true if it is safe to call writable.write(), which means the stream has not been destroyed, errored, or ended.

      • readonly writableCorked: number

        Number of times writable.uncork() needs to be called in order to fully uncork the stream.

      • readonly writableEnded: boolean

        Is true after writable.end() has been called. This property does not indicate whether the data has been flushed, for this use writable.writableFinished instead.

      • readonly writableFinished: boolean

        Is set to true immediately before the 'finish' event is emitted.

      • readonly writableHighWaterMark: number

        Return the value of highWaterMark passed when creating this Writable.

      • readonly writableLength: number

        This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the highWaterMark.

      • readonly writableNeedDrain: boolean

        Is true if the stream's buffer has been full and the stream will emit 'drain'.

      • readonly writableObjectMode: boolean

        Getter for the property objectMode of a given Writable stream.

      • callback: (error?: null | Error) => void
        ): void;
      • error: null | Error,
        callback: (error?: null | Error) => void
        ): void;
      • callback: (error?: null | Error) => void
        ): void;
      • ): void;
      • size: number
        ): void;
      • chunk: any,
        encoding: BufferEncoding,
        ): void;
      • chunk: any,
        encoding: BufferEncoding,
        callback: (error?: null | Error) => void
        ): void;
      • chunks: { chunk: any; encoding: BufferEncoding }[],
        callback: (error?: null | Error) => void
        ): void;
      • [Symbol.asyncDispose](): Promise<void>;

        Calls readable.destroy() with an AbortError and returns a promise that fulfills when the stream is finished.

      • [Symbol.asyncIterator](): AsyncIterator<any>;
      • error: Error,
        event: string | symbol,
        ...args: AnyRest
        ): void;
      • event: 'close',
        listener: () => void
        ): this;

        Event emitter. The defined events on this stream include:

        1. close
        2. data
        3. drain
        4. end
        5. error
        6. finish
        7. pause
        8. pipe
        9. readable
        10. resume
        11. unpipe
        event: 'data',
        listener: (chunk: any) => void
        ): this;

        event: 'drain',
        listener: () => void
        ): this;

        event: 'end',
        listener: () => void
        ): this;

        event: 'error',
        listener: (err: Error) => void
        ): this;

        event: 'finish',
        listener: () => void
        ): this;

        event: 'pause',
        listener: () => void
        ): this;

        event: 'pipe',
        listener: (src: Readable) => void
        ): this;

        event: 'readable',
        listener: () => void
        ): this;

        event: 'resume',
        listener: () => void
        ): this;

        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;

        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;

      • options?: Pick<ArrayOptions, 'signal'>

        This method returns a new stream with chunks of the underlying stream paired with a counter in the form [index, chunk]. The first index value is 0 and it increases by 1 for each chunk produced.

        @returns

        a stream of indexed pairs.
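
        For illustration, a minimal sketch with a plain Readable (a Decipher inherits this helper as a readable stream); note that this helper is experimental and has been deprecated in newer Node.js releases:

        import { Readable } from 'node:stream';

        const pairs = await Readable.from(['a', 'b', 'c'])
          .asIndexedPairs()
          .toArray();
        console.log(pairs); // Prints: [ [ 0, 'a' ], [ 1, 'b' ], [ 2, 'c' ] ]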

      • compose<T extends ReadableStream>(
        stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,
        options?: { signal: AbortSignal }
        ): T;
      • cork(): void;

        The writable.cork() method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.

        The primary intent of writable.cork() is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them all to writable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork() without implementing writable._writev() may have an adverse effect on throughput.

        See also: writable.uncork(), writable._writev().

      • error?: Error
        ): this;

        Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the readable stream will release any internal resources and subsequent calls to push() will be ignored.

        Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.

        Implementors should not override this method, but instead implement readable._destroy().

        @param error

        Error which will be passed as payload in 'error' event

      • limit: number,
        options?: Pick<ArrayOptions, 'signal'>

        This method returns a new stream with the first limit chunks dropped from the start.

        @param limit

        the number of chunks to drop from the readable.

        @returns

        a stream with limit chunks dropped from the start.
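
        A minimal sketch with toy data:

        import { Readable } from 'node:stream';

        // Drop the first two chunks; the rest flow through unchanged.
        const rest = await Readable.from([1, 2, 3, 4]).drop(2).toArray();
        console.log(rest); // Prints: [ 3, 4 ]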

      • event: 'close'
        ): boolean;

        Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

        Returns true if the event had listeners, false otherwise.

        import { EventEmitter } from 'node:events';
        const myEmitter = new EventEmitter();
        
        // First listener
        myEmitter.on('event', function firstListener() {
          console.log('Helloooo! first listener');
        });
        // Second listener
        myEmitter.on('event', function secondListener(arg1, arg2) {
          console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
        });
        // Third listener
        myEmitter.on('event', function thirdListener(...args) {
          const parameters = args.join(', ');
          console.log(`event with parameters ${parameters} in third listener`);
        });
        
        console.log(myEmitter.listeners('event'));
        
        myEmitter.emit('event', 1, 2, 3, 4, 5);
        
        // Prints:
        // [
        //   [Function: firstListener],
        //   [Function: secondListener],
        //   [Function: thirdListener]
        // ]
        // Helloooo! first listener
        // event with parameters 1, 2 in second listener
        // event with parameters 1, 2, 3, 4, 5 in third listener
        
        event: 'data',
        chunk: any
        ): boolean;
        event: 'drain'
        ): boolean;
        event: 'end'
        ): boolean;
        event: 'error',
        err: Error
        ): boolean;
        event: 'finish'
        ): boolean;
        event: 'pause'
        ): boolean;
        event: 'pipe',
        src: Readable
        ): boolean;
        event: 'readable'
        ): boolean;
        event: 'resume'
        ): boolean;
        event: 'unpipe',
        src: Readable
        ): boolean;
        event: string | symbol,
        ...args: any[]
        ): boolean;
      • cb?: () => void
        ): this;

        Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

        Calling the write method after calling end will raise an error.

        // Write 'hello, ' and then end with 'world!'.
        import fs from 'node:fs';
        const file = fs.createWriteStream('example.txt');
        file.write('hello, ');
        file.end('world!');
        // Writing more now is not allowed!
        
        chunk: any,
        cb?: () => void
        ): this;

        Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

        Calling the write method after calling end will raise an error.

        // Write 'hello, ' and then end with 'world!'.
        import fs from 'node:fs';
        const file = fs.createWriteStream('example.txt');
        file.write('hello, ');
        file.end('world!');
        // Writing more now is not allowed!
        
        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        chunk: any,
        encoding: BufferEncoding,
        cb?: () => void
        ): this;

        Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

        Calling the write method after calling end will raise an error.

        // Write 'hello, ' and then end with 'world!'.
        import fs from 'node:fs';
        const file = fs.createWriteStream('example.txt');
        file.write('hello, ');
        file.end('world!');
        // Writing more now is not allowed!
        
        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        @param encoding

        The encoding if chunk is a string

      • eventNames(): (string | symbol)[];

        Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or Symbols.

        import { EventEmitter } from 'node:events';
        
        const myEE = new EventEmitter();
        myEE.on('foo', () => {});
        myEE.on('bar', () => {});
        
        const sym = Symbol('symbol');
        myEE.on(sym, () => {});
        
        console.log(myEE.eventNames());
        // Prints: [ 'foo', 'bar', Symbol(symbol) ]
        
      • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
        options?: ArrayOptions
        ): Promise<boolean>;

        This method is similar to Array.prototype.every and calls fn on each chunk in the stream to check whether all awaited return values are truthy for fn. Once an fn call on a chunk's awaited return value is falsy, the stream is destroyed and the promise is fulfilled with false. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to true if fn returned a truthy value for every one of the chunks.
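
        For example, a short sketch with a plain Readable:

        import { Readable } from 'node:stream';

        // Fulfills with true because every chunk satisfies the predicate.
        const allPositive = await Readable.from([1, 2, 3]).every((x) => x > 0);
        console.log(allPositive); // Prints: true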

      • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
        options?: ArrayOptions

        This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be awaited.

        @param fn

        a function to filter chunks from the stream. Async or not.

        @returns

        a stream filtered with the predicate fn.
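
        A minimal sketch:

        import { Readable } from 'node:stream';

        const evens = await Readable.from([1, 2, 3, 4])
          .filter((x) => x % 2 === 0)
          .toArray();
        console.log(evens); // Prints: [ 2, 4 ]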

      • Once the decipher.final() method has been called, the Decipher object can no longer be used to decrypt data. Attempts to call decipher.final() more than once will result in an error being thrown.

        @returns

        Any remaining deciphered contents. If outputEncoding is specified, a string is returned. If an outputEncoding is not provided, a Buffer is returned.

        outputEncoding: BufferEncoding
        ): string;

        Once the decipher.final() method has been called, the Decipher object can no longer be used to decrypt data. Attempts to call decipher.final() more than once will result in an error being thrown.

        @param outputEncoding

        The encoding of the return value.

        @returns

        Any remaining deciphered contents. If outputEncoding is specified, a string is returned. If an outputEncoding is not provided, a Buffer is returned.

      • find<T>(
        fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => data is T,
        options?: ArrayOptions
        ): Promise<undefined | T>;

        This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.

        fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
        options?: ArrayOptions
        ): Promise<any>;

        This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.
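
        For example, a short sketch:

        import { Readable } from 'node:stream';

        // Fulfills with the first matching chunk and destroys the stream early.
        const found = await Readable.from([1, 2, 3]).find((x) => x > 1);
        console.log(found); // Prints: 2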

      • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => any,
        options?: ArrayOptions

        This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.

        It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.

        @param fn

        a function to map over every chunk in the stream. May be async. May be a stream or generator.

        @returns

        a stream flat-mapped with the function fn.
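
        A minimal sketch; each array returned from fn is an iterable, so it is flattened into the output stream:

        import { Readable } from 'node:stream';

        const letters = await Readable.from(['ab', 'cd'])
          .flatMap((s) => s.split(''))
          .toArray();
        console.log(letters); // Prints: [ 'a', 'b', 'c', 'd' ]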

      • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => void | Promise<void>,
        options?: ArrayOptions
        ): Promise<void>;

        This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be awaited.

        This method is different from for await...of loops in that it can optionally process chunks concurrently. In addition, a forEach iteration can only be stopped by having passed a signal option and aborting the related AbortController while for await...of can be stopped with break or return. In either case the stream will be destroyed.

        This method is different from listening to the 'data' event in that it uses the 'readable' event in the underlying machinery and can limit the number of concurrent fn calls.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise for when the stream has finished.
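
        For example, a short sketch:

        import { Readable } from 'node:stream';

        // Logs 1, 2, 3; the promise settles when the stream has finished.
        await Readable.from([1, 2, 3]).forEach((x) => console.log(x));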

      • getMaxListeners(): number;

        Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to EventEmitter.defaultMaxListeners.

      • isPaused(): boolean;

        The readable.isPaused() method returns the current operating state of the Readable. This is used primarily by the mechanism that underlies the readable.pipe() method. In most typical cases, there will be no reason to use this method directly.

        const readable = new stream.Readable();
        
        readable.isPaused(); // === false
        readable.pause();
        readable.isPaused(); // === true
        readable.resume();
        readable.isPaused(); // === false
        
      • options?: { destroyOnReturn: boolean }
        ): AsyncIterator<any>;

        The iterator created by this method gives users the option to cancel the destruction of the stream if the for await...of loop is exited by return, break, or throw, and to control whether the iterator destroys the stream if the stream emits an error during iteration.
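
        A minimal sketch of the destroyOnReturn option with a plain Readable:

        import { Readable } from 'node:stream';

        const readable = Readable.from([1, 2, 3]);
        for await (const chunk of readable.iterator({ destroyOnReturn: false })) {
          console.log(chunk); // Prints: 1
          // Exiting with break does not destroy the stream here,
          // because destroyOnReturn is false.
          break;
        }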

      • eventName: string | symbol,
        listener?: Function
        ): number;

        Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.

        @param eventName

        The name of the event being listened for

        @param listener

        The event handler function

      • eventName: string | symbol
        ): Function[];

        Returns a copy of the array of listeners for the event named eventName.

        server.on('connection', (stream) => {
          console.log('someone connected!');
        });
        console.log(util.inspect(server.listeners('connection')));
        // Prints: [ [Function] ]
        
      • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => any,
        options?: ArrayOptions

        This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be awaited before being passed to the result stream.

        @param fn

        a function to map over every chunk in the stream. Async or not.

        @returns

        a stream mapped with the function fn.
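
        For example, a short sketch:

        import { Readable } from 'node:stream';

        const doubled = await Readable.from([1, 2, 3])
          .map((x) => x * 2)
          .toArray();
        console.log(doubled); // Prints: [ 2, 4, 6 ]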

      • off<K>(
        eventName: string | symbol,
        listener: (...args: any[]) => void
        ): this;

        Alias for emitter.removeListener().

      • event: 'close',
        listener: () => void
        ): this;

        Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

        server.on('connection', (stream) => {
          console.log('someone connected!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

        import { EventEmitter } from 'node:events';
        const myEE = new EventEmitter();
        myEE.on('foo', () => console.log('a'));
        myEE.prependListener('foo', () => console.log('b'));
        myEE.emit('foo');
        // Prints:
        //   b
        //   a
        
        @param listener

        The callback function

        event: 'data',
        listener: (chunk: any) => void
        ): this;
        event: 'drain',
        listener: () => void
        ): this;
        event: 'end',
        listener: () => void
        ): this;
        event: 'error',
        listener: (err: Error) => void
        ): this;
        event: 'finish',
        listener: () => void
        ): this;
        event: 'pause',
        listener: () => void
        ): this;
        event: 'pipe',
        listener: (src: Readable) => void
        ): this;
        event: 'readable',
        listener: () => void
        ): this;
        event: 'resume',
        listener: () => void
        ): this;
        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;
        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • event: 'close',
        listener: () => void
        ): this;

        Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

        server.once('connection', (stream) => {
          console.log('Ah, we have our first user!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

        import { EventEmitter } from 'node:events';
        const myEE = new EventEmitter();
        myEE.once('foo', () => console.log('a'));
        myEE.prependOnceListener('foo', () => console.log('b'));
        myEE.emit('foo');
        // Prints:
        //   b
        //   a
        
        @param listener

        The callback function

        event: 'data',
        listener: (chunk: any) => void
        ): this;
        event: 'drain',
        listener: () => void
        ): this;
        event: 'end',
        listener: () => void
        ): this;
        event: 'error',
        listener: (err: Error) => void
        ): this;
        event: 'finish',
        listener: () => void
        ): this;
        event: 'pause',
        listener: () => void
        ): this;
        event: 'pipe',
        listener: (src: Readable) => void
        ): this;
        event: 'readable',
        listener: () => void
        ): this;
        event: 'resume',
        listener: () => void
        ): this;
        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;
        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • pause(): this;

        The readable.pause() method will cause a stream in flowing mode to stop emitting 'data' events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.

        const readable = getReadableStreamSomehow();
        readable.on('data', (chunk) => {
          console.log(`Received ${chunk.length} bytes of data.`);
          readable.pause();
          console.log('There will be no additional data for 1 second.');
          setTimeout(() => {
            console.log('Now data will start flowing again.');
            readable.resume();
          }, 1000);
        });
        

        The readable.pause() method has no effect if there is a 'readable' event listener.

      • pipe<T extends WritableStream>(
        destination: T,
        options?: { end: boolean }
        ): T;
      • event: 'close',
        listener: () => void
        ): this;

        Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

        server.prependListener('connection', (stream) => {
          console.log('someone connected!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        @param listener

        The callback function

        event: 'data',
        listener: (chunk: any) => void
        ): this;
        event: 'drain',
        listener: () => void
        ): this;
        event: 'end',
        listener: () => void
        ): this;
        event: 'error',
        listener: (err: Error) => void
        ): this;
        event: 'finish',
        listener: () => void
        ): this;
        event: 'pause',
        listener: () => void
        ): this;
        event: 'pipe',
        listener: (src: Readable) => void
        ): this;
        event: 'readable',
        listener: () => void
        ): this;
        event: 'resume',
        listener: () => void
        ): this;
        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;
        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • event: 'close',
        listener: () => void
        ): this;

        Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

        server.prependOnceListener('connection', (stream) => {
          console.log('Ah, we have our first user!');
        });
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        @param listener

        The callback function

        event: 'data',
        listener: (chunk: any) => void
        ): this;
        event: 'drain',
        listener: () => void
        ): this;
        event: 'end',
        listener: () => void
        ): this;
        event: 'error',
        listener: (err: Error) => void
        ): this;
        event: 'finish',
        listener: () => void
        ): this;
        event: 'pause',
        listener: () => void
        ): this;
        event: 'pipe',
        listener: (src: Readable) => void
        ): this;
        event: 'readable',
        listener: () => void
        ): this;
        event: 'resume',
        listener: () => void
        ): this;
        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;
        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • chunk: any,
        encoding?: BufferEncoding
        ): boolean;
      • eventName: string | symbol
        ): Function[];

        Returns a copy of the array of listeners for the event named eventName, including any wrappers (such as those created by .once()).

        import { EventEmitter } from 'node:events';
        const emitter = new EventEmitter();
        emitter.once('log', () => console.log('log once'));
        
        // Returns a new Array with a function `onceWrapper` which has a property
        // `listener` which contains the original listener bound above
        const listeners = emitter.rawListeners('log');
        const logFnWrapper = listeners[0];
        
        // Logs "log once" to the console and does not unbind the `once` event
        logFnWrapper.listener();
        
        // Logs "log once" to the console and removes the listener
        logFnWrapper();
        
        emitter.on('log', () => console.log('log persistently'));
        // Will return a new Array with a single function bound by `.on()` above
        const newListeners = emitter.rawListeners('log');
        
        // Logs "log persistently" twice
        newListeners[0]();
        emitter.emit('log');
        
      • size?: number
        ): any;

        The readable.read() method reads data out of the internal buffer and returns it. If no data is available to be read, null is returned. By default, the data is returned as a Buffer object unless an encoding has been specified using the readable.setEncoding() method or the stream is operating in object mode.

        The optional size argument specifies a specific number of bytes to read. If size bytes are not available to be read, null will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.

        If the size argument is not specified, all of the data contained in the internal buffer will be returned.

        The size argument must be less than or equal to 1 GiB.

        The readable.read() method should only be called on Readable streams operating in paused mode. In flowing mode, readable.read() is called automatically until the internal buffer is fully drained.

        const readable = getReadableStreamSomehow();
        
        // 'readable' may be triggered multiple times as data is buffered in
        readable.on('readable', () => {
          let chunk;
          console.log('Stream is readable (new data received in buffer)');
          // Use a loop to make sure we read all currently available data
          while (null !== (chunk = readable.read())) {
            console.log(`Read ${chunk.length} bytes of data...`);
          }
        });
        
        // 'end' will be triggered once when there is no more data available
        readable.on('end', () => {
          console.log('Reached end of stream.');
        });
        

        Each call to readable.read() returns a chunk of data, or null. The chunks are not concatenated. A while loop is necessary to consume all data currently in the buffer. When reading a large file, .read() may return null, having consumed all buffered content so far, while more data that has not yet been buffered is still to come. In this case a new 'readable' event will be emitted when there is more data in the buffer. Finally, the 'end' event will be emitted when there is no more data to come.

        Therefore to read a file's whole contents from a readable, it is necessary to collect chunks across multiple 'readable' events:

        const chunks = [];
        
        readable.on('readable', () => {
          let chunk;
          while (null !== (chunk = readable.read())) {
            chunks.push(chunk);
          }
        });
        
        readable.on('end', () => {
          const content = chunks.join('');
        });
        

        A Readable stream in object mode will always return a single item from a call to readable.read(size), regardless of the value of the size argument.

        If the readable.read() method returns a chunk of data, a 'data' event will also be emitted.

        Calling read after the 'end' event has been emitted will return null. No runtime error will be raised.

        @param size

        Optional argument to specify how much data to read.

      • reduce<T = any>(
        fn: (previous: any, data: any, options?: Pick<ArrayOptions, 'signal'>) => T,
        initial?: undefined,
        options?: Pick<ArrayOptions, 'signal'>
        ): Promise<T>;

        This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.

        If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.

        The reducer function iterates the stream element by element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function and use the readable.map method.

        @param fn

        a reducer function to call over every chunk in the stream. Async or not.

        @param initial

        the initial value to use in the reduction.

        @returns

        a promise for the final value of the reduction.

        reduce<T = any>(
        fn: (previous: T, data: any, options?: Pick<ArrayOptions, 'signal'>) => T,
        initial: T,
        options?: Pick<ArrayOptions, 'signal'>
        ): Promise<T>;

        This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.

        If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.

        The reducer function iterates the stream element by element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function and use the readable.map method.

        @param fn

        a reducer function to call over every chunk in the stream. Async or not.

        @param initial

        the initial value to use in the reduction.

        @returns

        a promise for the final value of the reduction.
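
        A minimal sketch, summing chunks from an explicit initial value:

        import { Readable } from 'node:stream';

        const total = await Readable.from([1, 2, 3, 4])
          .reduce((sum, x) => sum + x, 0);
        console.log(total); // Prints: 10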

      • eventName?: string | symbol
        ): this;

        Removes all listeners, or those of the specified eventName.

        It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).

        Returns a reference to the EventEmitter, so that calls can be chained.

      • event: 'close',
        listener: () => void
        ): this;

        Removes the specified listener from the listener array for the event named eventName.

        const callback = (stream) => {
          console.log('someone connected!');
        };
        server.on('connection', callback);
        // ...
        server.removeListener('connection', callback);
        

        removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance.

        Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from the emit() in progress. Subsequent events behave as expected.

        import { EventEmitter } from 'node:events';
        class MyEmitter extends EventEmitter {}
        const myEmitter = new MyEmitter();
        
        const callbackA = () => {
          console.log('A');
          myEmitter.removeListener('event', callbackB);
        };
        
        const callbackB = () => {
          console.log('B');
        };
        
        myEmitter.on('event', callbackA);
        
        myEmitter.on('event', callbackB);
        
        // callbackA removes listener callbackB but it will still be called.
        // Internal listener array at time of emit [callbackA, callbackB]
        myEmitter.emit('event');
        // Prints:
        //   A
        //   B
        
        // callbackB is now removed.
        // Internal listener array [callbackA]
        myEmitter.emit('event');
        // Prints:
        //   A
        

        Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.

        When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed:

        import { EventEmitter } from 'node:events';
        const ee = new EventEmitter();
        
        function pong() {
          console.log('pong');
        }
        
        ee.on('ping', pong);
        ee.once('ping', pong);
        ee.removeListener('ping', pong);
        
        ee.emit('ping');
        ee.emit('ping');
        

        Returns a reference to the EventEmitter, so that calls can be chained.

        event: 'data',
        listener: (chunk: any) => void
        ): this;
        event: 'drain',
        listener: () => void
        ): this;
        event: 'end',
        listener: () => void
        ): this;
        event: 'error',
        listener: (err: Error) => void
        ): this;
        event: 'finish',
        listener: () => void
        ): this;
        event: 'pause',
        listener: () => void
        ): this;
        event: 'pipe',
        listener: (src: Readable) => void
        ): this;
        event: 'readable',
        listener: () => void
        ): this;
        event: 'resume',
        listener: () => void
        ): this;
        event: 'unpipe',
        listener: (src: Readable) => void
        ): this;
        event: string | symbol,
        listener: (...args: any[]) => void
        ): this;
      • resume(): this;

        The readable.resume() method causes an explicitly paused Readable stream to resume emitting 'data' events, switching the stream into flowing mode.

        The readable.resume() method can be used to fully consume the data from a stream without actually processing any of that data:

        getReadableStreamSomehow()
          .resume()
          .on('end', () => {
            console.log('Reached the end, but did not read anything.');
          });
        

        The readable.resume() method has no effect if there is a 'readable' event listener.

      • buffer: ArrayBufferView,
        options?: { plaintextLength: number }
        ): this;
      • buffer: ArrayBufferView
        ): this;
      • auto_padding?: boolean
        ): this;

        When data has been encrypted without standard block padding, calling decipher.setAutoPadding(false) will disable automatic padding to prevent decipher.final() from checking for and removing padding.

        Turning auto padding off will only work if the input data's length is a multiple of the cipher's block size.

        The decipher.setAutoPadding() method must be called before decipher.final().

        @returns

        for method chaining.
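
        A minimal, self-contained sketch; the payload here is chosen to be exactly one 16-byte AES block so that padding can be disabled on both the cipher and decipher sides:

        import { Buffer } from 'node:buffer';
        const {
          scryptSync,
          createCipheriv,
          createDecipheriv,
        } = await import('node:crypto');

        const key = scryptSync('Password used to generate key', 'salt', 24);
        const iv = Buffer.alloc(16, 0);

        // Encrypt a block-aligned payload with padding disabled.
        const cipher = createCipheriv('aes-192-cbc', key, iv);
        cipher.setAutoPadding(false);
        const ciphertext = Buffer.concat([
          cipher.update('16-byte payload!'),
          cipher.final(),
        ]);

        const decipher = createDecipheriv('aes-192-cbc', key, iv);
        decipher.setAutoPadding(false); // Must be called before decipher.final().
        const plaintext = Buffer.concat([
          decipher.update(ciphertext),
          decipher.final(),
        ]);
        console.log(plaintext.toString('utf8')); // Prints: 16-byte payload!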

      • encoding: BufferEncoding
        ): this;

        The writable.setDefaultEncoding() method sets the default encoding for a Writable stream.

        @param encoding

        The new default encoding

      • encoding: BufferEncoding
        ): this;

        The readable.setEncoding() method sets the character encoding for data read from the Readable stream.

        By default, no encoding is assigned and stream data will be returned as Buffer objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than as Buffer objects. For instance, calling readable.setEncoding('utf8') will cause the output data to be interpreted as UTF-8 data, and passed as strings. Calling readable.setEncoding('hex') will cause the data to be encoded in hexadecimal string format.

        The Readable stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream as Buffer objects.

        const readable = getReadableStreamSomehow();
        readable.setEncoding('utf8');
        readable.on('data', (chunk) => {
          assert.equal(typeof chunk, 'string');
          console.log('Got %d characters of string data:', chunk.length);
        });
        
        @param encoding

        The encoding to use.

      • n: number
        ): this;

        By default EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps find memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.

        Returns a reference to the EventEmitter, so that calls can be chained.

      • fn: (data: any, options?: Pick<ArrayOptions, 'signal'>) => boolean | Promise<boolean>,
        options?: ArrayOptions
        ): Promise<boolean>;

        This method is similar to Array.prototype.some and calls fn on each chunk in the stream until the awaited return value is true (or any truthy value). Once an fn call on a chunk's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false.

        @param fn

        a function to call on each chunk of the stream. Async or not.

        @returns

        a promise evaluating to true if fn returned a truthy value for at least one of the chunks.
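
        For example, a short sketch:

        import { Readable } from 'node:stream';

        // Fulfills with true at the first negative chunk; the stream is then destroyed.
        const hasNegative = await Readable.from([1, -2, 3]).some((x) => x < 0);
        console.log(hasNegative); // Prints: true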

      • limit: number,
        options?: Pick<ArrayOptions, 'signal'>

        This method returns a new stream with the first limit chunks.

        @param limit

        the number of chunks to take from the readable.

        @returns

        a stream with limit chunks taken.
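
        A minimal sketch:

        import { Readable } from 'node:stream';

        const firstTwo = await Readable.from([1, 2, 3, 4]).take(2).toArray();
        console.log(firstTwo); // Prints: [ 1, 2 ]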

      • options?: Pick<ArrayOptions, 'signal'>
        ): Promise<any[]>;

        This method makes it easy to obtain the contents of a stream.

        As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.

        @returns

        a promise containing an array with the contents of the stream.
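
        For example:

        import { Readable } from 'node:stream';

        const chunks = await Readable.from(['a', 'b', 'c']).toArray();
        console.log(chunks); // ['a', 'b', 'c']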

      • uncork(): void;

        The writable.uncork() method flushes all data buffered since cork was called.

        When using writable.cork() and writable.uncork() to manage the buffering of writes to a stream, defer calls to writable.uncork() using process.nextTick(). Doing so allows batching of all writable.write() calls that occur within a given Node.js event loop phase.

        stream.cork();
        stream.write('some ');
        stream.write('data ');
        process.nextTick(() => stream.uncork());
        

        If the writable.cork() method is called multiple times on a stream, the same number of calls to writable.uncork() must be called to flush the buffered data.

        stream.cork();
        stream.write('some ');
        stream.cork();
        stream.write('data ');
        process.nextTick(() => {
          stream.uncork();
          // The data will not be flushed until uncork() is called a second time.
          stream.uncork();
        });
        

        See also: writable.cork().

      • destination?: WritableStream
        ): this;

        The readable.unpipe() method detaches a Writable stream previously attached using the pipe method.

        If the destination is not specified, then all pipes are detached.

        If the destination is specified, but no pipe is set up for it, then the method does nothing.

        import fs from 'node:fs';
        const readable = getReadableStreamSomehow();
        const writable = fs.createWriteStream('file.txt');
        // All the data from readable goes into 'file.txt',
        // but only for the first second.
        readable.pipe(writable);
        setTimeout(() => {
          console.log('Stop writing to file.txt.');
          readable.unpipe(writable);
          console.log('Manually close the file stream.');
          writable.end();
        }, 1000);
        
        @param destination

        Optional specific stream to unpipe

      • chunk: any,
        encoding?: BufferEncoding
        ): void;

        Passing chunk as null signals the end of the stream (EOF) and behaves the same as readable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.

        The readable.unshift() method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.

        The stream.unshift(chunk) method cannot be called after the 'end' event has been emitted or a runtime error will be thrown.

        Developers who often need stream.unshift() should consider switching to a Transform stream instead. See the API for stream implementers section for more information.

        // Pull off a header delimited by \n\n.
        // Use unshift() if we get too much.
        // Call the callback with (error, header, stream).
        import { StringDecoder } from 'node:string_decoder';
        function parseHeader(stream, callback) {
          stream.on('error', callback);
          stream.on('readable', onReadable);
          const decoder = new StringDecoder('utf8');
          let header = '';
          function onReadable() {
            let chunk;
            while (null !== (chunk = stream.read())) {
              const str = decoder.write(chunk);
              if (str.includes('\n\n')) {
                // Found the header boundary.
                const split = str.split(/\n\n/);
                header += split.shift();
                const remaining = split.join('\n\n');
                const buf = Buffer.from(remaining, 'utf8');
                stream.removeListener('error', callback);
                // Remove the 'readable' listener before unshifting.
                stream.removeListener('readable', onReadable);
                if (buf.length)
                  stream.unshift(buf);
                // Now the body of the message can be read from the stream.
                callback(null, header, stream);
                return;
              }
              // Still reading the header.
              header += str;
            }
          }
        }
        

        Unlike push, stream.unshift(chunk) will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results if readable.unshift() is called during a read (i.e. from within a _read implementation on a custom stream). Following the call to readable.unshift() with an immediate push will reset the reading state appropriately, however it is best to simply avoid calling readable.unshift() while in the process of performing a read.

        @param chunk

        Chunk of data to unshift onto the read queue. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray}, {DataView} or null. For object mode streams, chunk may be any JavaScript value.

        @param encoding

        Encoding of string chunks. Must be a valid Buffer encoding, such as 'utf8' or 'ascii'.

      • data: ArrayBufferView
        ): Buffer;

        Updates the decipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer. If data is a Buffer then inputEncoding is ignored.

        The outputEncoding specifies the output format of the deciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.

        The decipher.update() method can be called multiple times with new data until decipher.final() is called. Calling decipher.update() after decipher.final() will result in an error being thrown.

        data: string,
        inputEncoding: Encoding
        ): Buffer;

        Updates the decipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer. If data is a Buffer then inputEncoding is ignored.

        The outputEncoding specifies the output format of the deciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.

        The decipher.update() method can be called multiple times with new data until decipher.final() is called. Calling decipher.update() after decipher.final() will result in an error being thrown.

        @param inputEncoding

        The encoding of the data string.

        data: ArrayBufferView,
        inputEncoding: undefined,
        outputEncoding: Encoding
        ): string;

        Updates the decipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer. If data is a Buffer then inputEncoding is ignored.

        The outputEncoding specifies the output format of the deciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.

        The decipher.update() method can be called multiple times with new data until decipher.final() is called. Calling decipher.update() after decipher.final() will result in an error being thrown.

        @param inputEncoding

        The encoding of the data string.

        @param outputEncoding

        The encoding of the return value.

        data: string,
        inputEncoding: undefined | Encoding,
        outputEncoding: Encoding
        ): string;

        Updates the decipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer. If data is a Buffer then inputEncoding is ignored.

        The outputEncoding specifies the output format of the deciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.

        The decipher.update() method can be called multiple times with new data until decipher.final() is called. Calling decipher.update() after decipher.final() will result in an error being thrown.

        @param inputEncoding

        The encoding of the data string.

        @param outputEncoding

        The encoding of the return value.
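
        A minimal synchronous round trip showing the string-encoding overloads:

        import {
          createCipheriv,
          createDecipheriv,
          randomBytes,
          scryptSync,
        } from 'node:crypto';

        const algorithm = 'aes-192-cbc';
        const key = scryptSync('Password used to generate key', 'salt', 24);
        const iv = randomBytes(16);

        const cipher = createCipheriv(algorithm, key, iv);
        let encrypted = cipher.update('some clear text data', 'utf8', 'hex');
        encrypted += cipher.final('hex');

        const decipher = createDecipheriv(algorithm, key, iv);
        let decrypted = decipher.update(encrypted, 'hex', 'utf8');
        decrypted += decipher.final('utf8');
        console.log(decrypted); // Prints: some clear text data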

      • stream: ReadableStream
        ): this;

        Prior to Node.js 0.10, streams did not implement the entire node:stream module API as it is currently defined. (See Compatibility for more information.)

        When using an older Node.js library that emits 'data' events and has a pause method that is advisory only, the readable.wrap() method can be used to create a Readable stream that uses the old stream as its data source.

        It will rarely be necessary to use readable.wrap() but the method has been provided as a convenience for interacting with older Node.js applications and libraries.

        import { OldReader } from './old-api-module.js';
        import { Readable } from 'node:stream';
        const oreader = new OldReader();
        const myReader = new Readable().wrap(oreader);
        
        myReader.on('readable', () => {
          myReader.read(); // etc.
        });
        
        @param stream

        An "old style" readable stream

      • chunk: any,
        callback?: (error: undefined | null | Error) => void
        ): boolean;

        The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

        The return value is true if, after admitting chunk, the internal buffer is still less than the highWaterMark configured when the stream was created. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

        While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing to a socket that is not draining may lead to a remotely exploitable vulnerability.

        Writing data while the stream is not draining is particularly problematic for a Transform, because Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

        If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

        function write(data, cb) {
          if (!stream.write(data)) {
            stream.once('drain', cb);
          } else {
            process.nextTick(cb);
          }
        }
        
        // Wait for cb to be called before doing any other write.
        write('hello', () => {
          console.log('Write completed, do more writes now.');
        });
        

        A Writable stream in object mode will always ignore the encoding argument.

        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        @param callback

        Callback for when this chunk of data is flushed.

        @returns

        false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.

        chunk: any,
        encoding: BufferEncoding,
        callback?: (error: undefined | null | Error) => void
        ): boolean;

        The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

        The return value is true if, after admitting chunk, the internal buffer is still less than the highWaterMark configured when the stream was created. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

        While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing to a socket that is not draining may lead to a remotely exploitable vulnerability.

        Writing data while the stream is not draining is particularly problematic for a Transform, because Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

        If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

        function write(data, cb) {
          if (!stream.write(data)) {
            stream.once('drain', cb);
          } else {
            process.nextTick(cb);
          }
        }
        
        // Wait for cb to be called before doing any other write.
        write('hello', () => {
          console.log('Write completed, do more writes now.');
        });
        

        A Writable stream in object mode will always ignore the encoding argument.

        @param chunk

        Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

        @param encoding

        The encoding, if chunk is a string.

        @param callback

        Callback for when this chunk of data is flushed.

        @returns

        false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.

    • interface DiffieHellmanGroupConstructor

    • interface DSAKeyPairKeyObjectOptions

    • interface DSAKeyPairOptions<PubF extends KeyFormat, PrivF extends KeyFormat>

    • interface ECKeyPairKeyObjectOptions

      • namedCurve: string

        Name of the curve to use

      • paramEncoding?: 'explicit' | 'named'

        Must be 'named' or 'explicit'. Default: 'named'.

    • interface ECKeyPairOptions<PubF extends KeyFormat, PrivF extends KeyFormat>

    • interface ED25519KeyPairOptions<PubF extends KeyFormat, PrivF extends KeyFormat>

    • interface ED448KeyPairOptions<PubF extends KeyFormat, PrivF extends KeyFormat>

    • interface HashOptions

    • interface JsonWebKey

      • crv?: string
      • d?: string
      • dp?: string
      • dq?: string
      • e?: string
      • k?: string
      • kty?: string
      • n?: string
      • p?: string
      • q?: string
      • qi?: string
      • x?: string
      • y?: string

    • interface JwkKeyExportOptions

    • interface KeyExportOptions<T extends KeyFormat>

    • interface KeyPairSyncResult<T1 extends string | Buffer, T2 extends string | Buffer>

    • interface PrivateKeyInput

    • interface PublicKeyInput

    • interface RandomUUIDOptions

      • disableEntropyCache?: boolean

        By default, to improve performance, Node.js will pre-emptively generate and persistently cache enough random data to generate up to 128 random UUIDs. To generate a UUID without using the cache, set disableEntropyCache to true.
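
        For example:

        import { randomUUID } from 'node:crypto';

        const id = randomUUID({ disableEntropyCache: true }); // fresh entropy for this call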

    • interface RSAKeyPairKeyObjectOptions

    • interface RSAKeyPairOptions<PubF extends KeyFormat, PrivF extends KeyFormat>

    • interface RsaPrivateKey

    • interface RSAPSSKeyPairKeyObjectOptions

    • interface RSAPSSKeyPairOptions<PubF extends KeyFormat, PrivF extends KeyFormat>

    • interface RsaPublicKey

    • interface ScryptOptions

    • interface SecureHeapUsage

      • min: number

        The minimum allocation from the secure heap as specified using the --secure-heap-min command-line flag.

      • total: number

        The total allocated secure heap size as specified using the --secure-heap=n command-line flag.

      • used: number

        The total number of bytes currently allocated from the secure heap.

      • utilization: number

        The calculated ratio of used to total allocated bytes.
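
        A minimal sketch of reading these fields, assuming the process was started with --secure-heap=n and that crypto.secureHeapUsed() is available:

        import { secureHeapUsed } from 'node:crypto';

        const { min, total, used, utilization } = secureHeapUsed();
        console.log(`secure heap: ${used} of ${total} bytes in use (${(utilization * 100).toFixed(1)}%)`);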

    • interface SignPrivateKeyInput

    • interface VerifyPublicKeyInput

    • interface X25519KeyPairOptions<PubF extends KeyFormat, PrivF extends KeyFormat>

    • interface X448KeyPairOptions<PubF extends KeyFormat, PrivF extends KeyFormat>

    • interface X509CheckOptions

    • type BinaryLike = string | NodeJS.ArrayBufferView
    • type BinaryToTextEncoding = 'base64' | 'base64url' | 'hex' | 'binary'
    • type CharacterEncoding = 'utf8' | 'utf-8' | 'utf16le' | 'utf-16le' | 'latin1'
    • type CipherCCMTypes = 'aes-128-ccm' | 'aes-192-ccm' | 'aes-256-ccm'
    • type CipherChaCha20Poly1305Types = 'chacha20-poly1305'
    • type CipherGCMTypes = 'aes-128-gcm' | 'aes-192-gcm' | 'aes-256-gcm'
    • type CipherMode = 'cbc' | 'ccm' | 'cfb' | 'ctr' | 'ecb' | 'gcm' | 'ocb' | 'ofb' | 'stream' | 'wrap' | 'xts'
    • type CipherOCBTypes = 'aes-128-ocb' | 'aes-192-ocb' | 'aes-256-ocb'
    • type DiffieHellmanGroup = Omit<DiffieHellman, 'setPublicKey' | 'setPrivateKey'>
    • type DSAEncoding = 'der' | 'ieee-p1363'
    • type ECDHKeyFormat = 'compressed' | 'uncompressed' | 'hybrid'
    • type KeyFormat = 'pem' | 'der' | 'jwk'
    • type KeyLike = string | Buffer | KeyObject
    • type KeyObjectType = 'secret' | 'public' | 'private'
    • type KeyType = 'rsa' | 'rsa-pss' | 'dsa' | 'ec' | 'ed25519' | 'ed448' | 'x25519' | 'x448'
    • type LargeNumberLike = NodeJS.ArrayBufferView | SharedArrayBuffer | ArrayBuffer | bigint
    • type LegacyCharacterEncoding = 'ascii' | 'binary' | 'ucs2' | 'ucs-2'
    • type UUID = `${string}-${string}-${string}-${string}-${string}`