Node.js module
crypto
The 'node:crypto' module provides cryptographic functionality, including wrappers for OpenSSL's hash, HMAC, cipher, decipher, sign, verify, and key derivation functions.
It supports common algorithms such as SHA-256, AES, RSA, ECDH, and more. The module also offers secure random number generation, key management, and certificate handling, making it essential for implementing secure protocols and data encryption.
Works in Bun
Most crypto functionality is implemented, but some specific methods related to engine configuration, FIPS mode, and secure heap usage are missing.
namespace constants
Specifies the built-in default cipher list used by Node.js (colon-separated values).
Causes the salt length for RSA_PKCS1_PSS_PADDING to be determined automatically when verifying a signature.
Sets the salt length for RSA_PKCS1_PSS_PADDING to the digest size when signing or verifying.
Sets the salt length for RSA_PKCS1_PSS_PADDING to the maximum permissible value when signing data.
Applies multiple bug workarounds within OpenSSL. See https://www.openssl.org/docs/man1.0.2/ssl/SSL_CTX_set_options.html for details.
Instructs OpenSSL to allow a non-[EC]DHE-based key exchange mode for TLS v1.3.
Allows legacy insecure renegotiation between OpenSSL and unpatched clients or servers. See https://www.openssl.org/docs/man1.0.2/ssl/SSL_CTX_set_options.html.
Attempts to use the server's preferences instead of the client's when selecting a cipher. See https://www.openssl.org/docs/man1.0.2/ssl/SSL_CTX_set_options.html.
Instructs OpenSSL to use Cisco's version identifier of DTLS_BAD_VER.
Instructs OpenSSL to turn on cookie exchange.
Instructs OpenSSL to add server-hello extension from an early version of the cryptopro draft.
Instructs OpenSSL to disable an SSL 3.0/TLS 1.0 vulnerability workaround added in OpenSSL 0.9.6d.
Allows initial connection to servers that do not support RI.
Instructs OpenSSL to disable support for SSL/TLS compression.
Instructs OpenSSL to disable encrypt-then-MAC.
Instructs OpenSSL to disable renegotiation.
Instructs OpenSSL to always start a new session when performing renegotiation.
Instructs OpenSSL to turn off SSL v2.
Instructs OpenSSL to turn off SSL v3.
Instructs OpenSSL to disable use of RFC4507bis tickets.
Instructs OpenSSL to turn off TLS v1.
Instructs OpenSSL to turn off TLS v1.1.
Instructs OpenSSL to turn off TLS v1.2.
Instructs OpenSSL to turn off TLS v1.3.
Instructs OpenSSL server to prioritize ChaCha20-Poly1305 when the client does. This option has no effect if `SSL_OP_CIPHER_SERVER_PREFERENCE` is not enabled.
Instructs OpenSSL to disable version rollback attack detection.
class Cipheriv
Instances of the `Cipheriv` class are used to encrypt data. The class can be used in one of two ways:

- As a stream that is both readable and writable, where plain unencrypted data is written to produce encrypted data on the readable side, or
- Using the `cipher.update()` and `cipher.final()` methods to produce the encrypted data.

The createCipheriv method is used to create `Cipheriv` instances. `Cipheriv` objects are not to be created directly using the `new` keyword.

Example: Using `Cipheriv` objects as streams:

```js
const {
  scrypt,
  randomFill,
  createCipheriv,
} = await import('node:crypto');

const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';

// First, we'll generate the key. The key length is dependent on the algorithm.
// In this case for aes192, it is 24 bytes (192 bits).
scrypt(password, 'salt', 24, (err, key) => {
  if (err) throw err;
  // Then, we'll generate a random initialization vector
  randomFill(new Uint8Array(16), (err, iv) => {
    if (err) throw err;

    // Once we have the key and iv, we can create and use the cipher...
    const cipher = createCipheriv(algorithm, key, iv);

    let encrypted = '';
    cipher.setEncoding('hex');

    cipher.on('data', (chunk) => encrypted += chunk);
    cipher.on('end', () => console.log(encrypted));

    cipher.write('some clear text data');
    cipher.end();
  });
});
```

Example: Using `Cipheriv` and piped streams:

```js
import {
  createReadStream,
  createWriteStream,
} from 'node:fs';

import {
  pipeline,
} from 'node:stream';

const {
  scrypt,
  randomFill,
  createCipheriv,
} = await import('node:crypto');

const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';

// First, we'll generate the key. The key length is dependent on the algorithm.
// In this case for aes192, it is 24 bytes (192 bits).
scrypt(password, 'salt', 24, (err, key) => {
  if (err) throw err;
  // Then, we'll generate a random initialization vector
  randomFill(new Uint8Array(16), (err, iv) => {
    if (err) throw err;

    const cipher = createCipheriv(algorithm, key, iv);

    const input = createReadStream('test.js');
    const output = createWriteStream('test.enc');

    pipeline(input, cipher, output, (err) => {
      if (err) throw err;
    });
  });
});
```

Example: Using the `cipher.update()` and `cipher.final()` methods:

```js
const {
  scrypt,
  randomFill,
  createCipheriv,
} = await import('node:crypto');

const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';

// First, we'll generate the key. The key length is dependent on the algorithm.
// In this case for aes192, it is 24 bytes (192 bits).
scrypt(password, 'salt', 24, (err, key) => {
  if (err) throw err;
  // Then, we'll generate a random initialization vector
  randomFill(new Uint8Array(16), (err, iv) => {
    if (err) throw err;

    const cipher = createCipheriv(algorithm, key, iv);

    let encrypted = cipher.update('some clear text data', 'utf8', 'hex');
    encrypted += cipher.final('hex');
    console.log(encrypted);
  });
});
```
- allowHalfOpen: boolean
If `false` then the stream will automatically end the writable side when the readable side ends. Set initially by the `allowHalfOpen` constructor option, which defaults to `true`.

This can be changed manually to change the half-open behavior of an existing `Duplex` stream instance, but must be changed before the `'end'` event is emitted.
- readable: boolean
Is `true` if it is safe to call read, which means the stream has not been destroyed or emitted `'error'` or `'end'`.
- readonly readableAborted: boolean
Returns whether the stream was destroyed or errored before emitting `'end'`.
- readonly readableEncoding: null | BufferEncoding
Getter for the property `encoding` of a given `Readable` stream. The `encoding` property can be set using the setEncoding method.
- readonly readableFlowing: null | boolean
This property reflects the current state of a `Readable` stream as described in the Three states section.
- readonly readableHighWaterMark: number
Returns the value of `highWaterMark` passed when creating this `Readable`.
- readonly readableLength: number
This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the `highWaterMark`.
- readonly writable: boolean
Is `true` if it is safe to call `writable.write()`, which means the stream has not been destroyed, errored, or ended.
- readonly writableAborted: boolean
Returns whether the stream was destroyed or errored before emitting `'finish'`.
- readonly writableCorked: number
Number of times `writable.uncork()` needs to be called in order to fully uncork the stream.
- readonly writableEnded: boolean
Is `true` after `writable.end()` has been called. This property does not indicate whether the data has been flushed; for this use `writable.writableFinished` instead.
- readonly writableHighWaterMark: number
Returns the value of `highWaterMark` passed when creating this `Writable`.
- readonly writableLength: number
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the `highWaterMark`.
- readonly writableNeedDrain: boolean
Is `true` if the stream's buffer has been full and the stream will emit `'drain'`.
- static captureRejections: boolean
Value: boolean
Change the default `captureRejections` option on all new `EventEmitter` objects.
- readonly static captureRejectionSymbol: typeof captureRejectionSymbol
Value: `Symbol.for('nodejs.rejection')`

See how to write a custom rejection handler.
- static defaultMaxListeners: number
By default, a maximum of `10` listeners can be registered for any single event. This limit can be changed for individual `EventEmitter` instances using the `emitter.setMaxListeners(n)` method. To change the default for all `EventEmitter` instances, the `events.defaultMaxListeners` property can be used. If this value is not a positive number, a `RangeError` is thrown.

Take caution when setting the `events.defaultMaxListeners` because the change affects all `EventEmitter` instances, including those created before the change is made. However, calling `emitter.setMaxListeners(n)` still has precedence over `events.defaultMaxListeners`.

This is not a hard limit. The `EventEmitter` instance will allow more listeners to be added but will output a trace warning to stderr indicating that a "possible EventEmitter memory leak" has been detected. For any single `EventEmitter`, the `emitter.getMaxListeners()` and `emitter.setMaxListeners()` methods can be used to temporarily avoid this warning:

```js
import { EventEmitter } from 'node:events';
const emitter = new EventEmitter();
emitter.setMaxListeners(emitter.getMaxListeners() + 1);
emitter.once('event', () => {
  // do stuff
  emitter.setMaxListeners(Math.max(emitter.getMaxListeners() - 1, 0));
});
```

The `--trace-warnings` command-line flag can be used to display the stack trace for such warnings.

The emitted warning can be inspected with `process.on('warning')` and will have the additional `emitter`, `type`, and `count` properties, referring to the event emitter instance, the event's name and the number of attached listeners, respectively. Its `name` property is set to `'MaxListenersExceededWarning'`.
- readonly static errorMonitor: typeof errorMonitor
This symbol shall be used to install a listener for only monitoring `'error'` events. Listeners installed using this symbol are called before the regular `'error'` listeners are called.

Installing a listener using this symbol does not change the behavior once an `'error'` event is emitted. Therefore, the process will still crash if no regular `'error'` listener is installed.

Calls `readable.destroy()` with an `AbortError` and returns a promise that fulfills when the stream is finished.

@returns `AsyncIterator` to fully consume the stream.
- event: 'close', listener: () => void): this;
Event emitter. The defined events on documents include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'data', listener: (chunk: any) => void): this;
event: 'drain', listener: () => void): this;
event: 'end', listener: () => void): this;
event: 'error', ): this;
event: 'finish', listener: () => void): this;
event: 'pause', listener: () => void): this;
event: 'pipe', ): this;
event: 'readable', listener: () => void): this;
event: 'resume', listener: () => void): this;
event: 'unpipe', ): this;
event: string | symbol, listener: (...args: any[]) => void): this;
This method returns a new stream with chunks of the underlying stream paired with a counter in the form `[index, chunk]`. The first index value is `0` and it increases by 1 for each chunk produced.

@returns a stream of indexed pairs.
- stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
The `writable.cork()` method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.

The primary intent of `writable.cork()` is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, `writable.cork()` buffers all the chunks until `writable.uncork()` is called, which will pass them all to `writable._writev()`, if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of `writable.cork()` without implementing `writable._writev()` may have an adverse effect on throughput.

See also: `writable.uncork()`, `writable._writev()`.
- ): this;
Destroy the stream. Optionally emit an `'error'` event, and emit a `'close'` event (unless `emitClose` is set to `false`). After this call, the readable stream will release any internal resources and subsequent calls to `push()` will be ignored.

Once `destroy()` has been called any further calls will be a no-op and no further errors except from `_destroy()` may be emitted as `'error'`.

Implementors should not override this method, but instead implement `readable._destroy()`.

@param error Error which will be passed as payload in `'error'` event
- drop(limit: number,
This method returns a new stream with the first limit chunks dropped from the start.
@param limit the number of chunks to drop from the readable.
@returns a stream with limit chunks dropped from the start.
- emit(event: 'close'): boolean;
Synchronously calls each of the listeners registered for the event named `eventName`, in the order they were registered, passing the supplied arguments to each.

Returns `true` if the event had listeners, `false` otherwise.

```js
import { EventEmitter } from 'node:events';
const myEmitter = new EventEmitter();

// First listener
myEmitter.on('event', function firstListener() {
  console.log('Helloooo! first listener');
});
// Second listener
myEmitter.on('event', function secondListener(arg1, arg2) {
  console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
});
// Third listener
myEmitter.on('event', function thirdListener(...args) {
  const parameters = args.join(', ');
  console.log(`event with parameters ${parameters} in third listener`);
});

console.log(myEmitter.listeners('event'));

myEmitter.emit('event', 1, 2, 3, 4, 5);

// Prints:
// [
//   [Function: firstListener],
//   [Function: secondListener],
//   [Function: thirdListener]
// ]
// Helloooo! first listener
// event with parameters 1, 2 in second listener
// event with parameters 1, 2, 3, 4, 5 in third listener
```
- end(cb?: () => void): this;
Calling the `writable.end()` method signals that no more data will be written to the `Writable`. The optional `chunk` and `encoding` arguments allow one final additional chunk of data to be written immediately before closing the stream.

Calling the write method after calling end will raise an error.

```js
// Write 'hello, ' and then end with 'world!'.
import fs from 'node:fs';
const file = fs.createWriteStream('example.txt');
file.write('hello, ');
file.end('world!');
// Writing more now is not allowed!
```

end(chunk: any, cb?: () => void): this;

Calling the `writable.end()` method signals that no more data will be written to the `Writable`. The optional `chunk` and `encoding` arguments allow one final additional chunk of data to be written immediately before closing the stream.

Calling the write method after calling end will raise an error.

```js
// Write 'hello, ' and then end with 'world!'.
import fs from 'node:fs';
const file = fs.createWriteStream('example.txt');
file.write('hello, ');
file.end('world!');
// Writing more now is not allowed!
```

@param chunk Optional data to write. For streams not operating in object mode, `chunk` must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, `chunk` may be any JavaScript value other than `null`.

end(chunk: any, encoding: BufferEncoding, cb?: () => void): this;

Calling the `writable.end()` method signals that no more data will be written to the `Writable`. The optional `chunk` and `encoding` arguments allow one final additional chunk of data to be written immediately before closing the stream.

Calling the write method after calling end will raise an error.

```js
// Write 'hello, ' and then end with 'world!'.
import fs from 'node:fs';
const file = fs.createWriteStream('example.txt');
file.write('hello, ');
file.end('world!');
// Writing more now is not allowed!
```

@param chunk Optional data to write. For streams not operating in object mode, `chunk` must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, `chunk` may be any JavaScript value other than `null`.
@param encoding The encoding if `chunk` is a string

Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or `Symbol`s.

```js
import { EventEmitter } from 'node:events';

const myEE = new EventEmitter();
myEE.on('foo', () => {});
myEE.on('bar', () => {});

const sym = Symbol('symbol');
myEE.on(sym, () => {});

console.log(myEE.eventNames());
// Prints: [ 'foo', 'bar', Symbol(symbol) ]
```
- ): Promise<boolean>;
This method is similar to `Array.prototype.every` and calls fn on each chunk in the stream to check whether all awaited return values are truthy for fn. Once an fn call on a chunk's awaited return value is falsy, the stream is destroyed and the promise is fulfilled with `false`. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with `true`.

@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to `true` if fn returned a truthy value for every one of the chunks.

This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise, that promise will be `await`ed.

@param fn a function to filter chunks from the stream. Async or not.
@returns a stream filtered with the predicate fn.
Once the `cipher.final()` method has been called, the `Cipheriv` object can no longer be used to encrypt data. Attempts to call `cipher.final()` more than once will result in an error being thrown.

@returns Any remaining enciphered contents. If `outputEncoding` is specified, a string is returned. If an `outputEncoding` is not provided, a Buffer is returned.

outputEncoding: BufferEncoding): string;

Once the `cipher.final()` method has been called, the `Cipheriv` object can no longer be used to encrypt data. Attempts to call `cipher.final()` more than once will result in an error being thrown.

@param outputEncoding The `encoding` of the return value.
@returns Any remaining enciphered contents. If `outputEncoding` is specified, a string is returned. If an `outputEncoding` is not provided, a Buffer is returned.
- ): Promise<undefined | T>;
This method is similar to `Array.prototype.find` and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with `undefined`.

@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to the first chunk for which fn evaluated with a truthy value, or `undefined` if no element was found.

find(): Promise<any>;

This method is similar to `Array.prototype.find` and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with `undefined`.

@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to the first chunk for which fn evaluated with a truthy value, or `undefined` if no element was found.

This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.

It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.

@param fn a function to map over every chunk in the stream. May be async. May be a stream or generator.
@returns a stream flat-mapped with the function fn.
- ): Promise<void>;
This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise, that promise will be `await`ed.

This method is different from `for await...of` loops in that it can optionally process chunks concurrently. In addition, a `forEach` iteration can only be stopped by having passed a `signal` option and aborting the related AbortController, while `for await...of` can be stopped with `break` or `return`. In either case the stream will be destroyed.

This method is different from listening to the `'data'` event in that it uses the `readable` event in the underlying machinery and can limit the number of concurrent fn calls.

@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise for when the stream has finished.
Returns the current max listener value for the `EventEmitter` which is either set by `emitter.setMaxListeners(n)` or defaults to EventEmitter.defaultMaxListeners.

The `readable.isPaused()` method returns the current operating state of the `Readable`. This is used primarily by the mechanism that underlies the `readable.pipe()` method. In most typical cases, there will be no reason to use this method directly.

```js
const readable = new stream.Readable();

readable.isPaused(); // === false
readable.pause();
readable.isPaused(); // === true
readable.resume();
readable.isPaused(); // === false
```
- options?: { destroyOnReturn: boolean }): AsyncIterator<any>;

The iterator created by this method gives users the option to cancel the destruction of the stream if the `for await...of` loop is exited by `return`, `break`, or `throw`, or if the iterator should destroy the stream if the stream emitted an error during iteration.
- eventName: string | symbol, listener?: Function): number;
Returns the number of listeners listening for the event named `eventName`. If `listener` is provided, it will return how many times the listener is found in the list of the listeners of the event.

@param eventName The name of the event being listened for
@param listener The event handler function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named `eventName`.

```js
server.on('connection', (stream) => {
  console.log('someone connected!');
});
console.log(util.inspect(server.listeners('connection')));
// Prints: [ [Function] ]
```
- map(
This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise, that promise will be `await`ed before being passed to the result stream.

@param fn a function to map over every chunk in the stream. Async or not.
@returns a stream mapped with the function fn.
- eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for `emitter.removeListener()`.
- on(event: 'close', listener: () => void): this;
Adds the `listener` function to the end of the listeners array for the event named `eventName`. No checks are made to see if the `listener` has already been added. Multiple calls passing the same combination of `eventName` and `listener` will result in the `listener` being added, and called, multiple times.

```js
server.on('connection', (stream) => {
  console.log('someone connected!');
});
```

Returns a reference to the `EventEmitter`, so that calls can be chained.

By default, event listeners are invoked in the order they are added. The `emitter.prependListener()` method can be used as an alternative to add the event listener to the beginning of the listeners array.

```js
import { EventEmitter } from 'node:events';
const myEE = new EventEmitter();
myEE.on('foo', () => console.log('a'));
myEE.prependListener('foo', () => console.log('b'));
myEE.emit('foo');
// Prints:
//   b
//   a
```

@param listener The callback function
- once(event: 'close', listener: () => void): this;
Adds a one-time `listener` function for the event named `eventName`. The next time `eventName` is triggered, this listener is removed and then invoked.

```js
server.once('connection', (stream) => {
  console.log('Ah, we have our first user!');
});
```

Returns a reference to the `EventEmitter`, so that calls can be chained.

By default, event listeners are invoked in the order they are added. The `emitter.prependOnceListener()` method can be used as an alternative to add the event listener to the beginning of the listeners array.

```js
import { EventEmitter } from 'node:events';
const myEE = new EventEmitter();
myEE.once('foo', () => console.log('a'));
myEE.prependOnceListener('foo', () => console.log('b'));
myEE.emit('foo');
// Prints:
//   b
//   a
```

@param listener The callback function
The `readable.pause()` method will cause a stream in flowing mode to stop emitting `'data'` events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.

```js
const readable = getReadableStreamSomehow();
readable.on('data', (chunk) => {
  console.log(`Received ${chunk.length} bytes of data.`);
  readable.pause();
  console.log('There will be no additional data for 1 second.');
  setTimeout(() => {
    console.log('Now data will start flowing again.');
    readable.resume();
  }, 1000);
});
```

The `readable.pause()` method has no effect if there is a `'readable'` event listener.
- event: 'close', listener: () => void): this;
Adds the `listener` function to the beginning of the listeners array for the event named `eventName`. No checks are made to see if the `listener` has already been added. Multiple calls passing the same combination of `eventName` and `listener` will result in the `listener` being added, and called, multiple times.

```js
server.prependListener('connection', (stream) => {
  console.log('someone connected!');
});
```

Returns a reference to the `EventEmitter`, so that calls can be chained.

@param listener The callback function
- event: 'close', listener: () => void): this;
Adds a one-time `listener` function for the event named `eventName` to the beginning of the listeners array. The next time `eventName` is triggered, this listener is removed, and then invoked.

```js
server.prependOnceListener('connection', (stream) => {
  console.log('Ah, we have our first user!');
});
```

Returns a reference to the `EventEmitter`, so that calls can be chained.

@param listener The callback function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named `eventName`, including any wrappers (such as those created by `.once()`).

```js
import { EventEmitter } from 'node:events';
const emitter = new EventEmitter();
emitter.once('log', () => console.log('log once'));

// Returns a new Array with a function `onceWrapper` which has a property
// `listener` which contains the original listener bound above
const listeners = emitter.rawListeners('log');
const logFnWrapper = listeners[0];

// Logs "log once" to the console and does not unbind the `once` event
logFnWrapper.listener();

// Logs "log once" to the console and removes the listener
logFnWrapper();

emitter.on('log', () => console.log('log persistently'));
// Will return a new Array with a single function bound by `.on()` above
const newListeners = emitter.rawListeners('log');

// Logs "log persistently" twice
newListeners[0]();
emitter.emit('log');
```
- read(size?: number): any;
The `readable.read()` method reads data out of the internal buffer and returns it. If no data is available to be read, `null` is returned. By default, the data is returned as a `Buffer` object unless an encoding has been specified using the `readable.setEncoding()` method or the stream is operating in object mode.

The optional `size` argument specifies a specific number of bytes to read. If `size` bytes are not available to be read, `null` will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.

If the `size` argument is not specified, all of the data contained in the internal buffer will be returned.

The `size` argument must be less than or equal to 1 GiB.

The `readable.read()` method should only be called on `Readable` streams operating in paused mode. In flowing mode, `readable.read()` is called automatically until the internal buffer is fully drained.

```js
const readable = getReadableStreamSomehow();

// 'readable' may be triggered multiple times as data is buffered in
readable.on('readable', () => {
  let chunk;
  console.log('Stream is readable (new data received in buffer)');
  // Use a loop to make sure we read all currently available data
  while (null !== (chunk = readable.read())) {
    console.log(`Read ${chunk.length} bytes of data...`);
  }
});

// 'end' will be triggered once when there is no more data available
readable.on('end', () => {
  console.log('Reached end of stream.');
});
```

Each call to `readable.read()` returns a chunk of data, or `null`. The chunks are not concatenated. A `while` loop is necessary to consume all data currently in the buffer. When reading a large file `.read()` may return `null`, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new `'readable'` event will be emitted when there is more data in the buffer. Finally the `'end'` event will be emitted when there is no more data to come.

Therefore to read a file's whole contents from a `readable`, it is necessary to collect chunks across multiple `'readable'` events:

```js
const chunks = [];

readable.on('readable', () => {
  let chunk;
  while (null !== (chunk = readable.read())) {
    chunks.push(chunk);
  }
});

readable.on('end', () => {
  const content = chunks.join('');
});
```

A `Readable` stream in object mode will always return a single item from a call to `readable.read(size)`, regardless of the value of the `size` argument.

If the `readable.read()` method returns a chunk of data, a `'data'` event will also be emitted.

Calling read after the `'end'` event has been emitted will return `null`. No runtime error will be raised.

@param size Optional argument to specify how much data to read.
- initial?: undefined, ): Promise<T>;
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.

If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a `TypeError` with the `ERR_INVALID_ARGS` code property.

The reducer function iterates the stream element-by-element which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the `readable.map` method.

@param fn a reducer function to call over every chunk in the stream. Async or not.
@param initial the initial value to use in the reduction.
@returns a promise for the final value of the reduction.

initial: T, ): Promise<T>;

This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.

If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a `TypeError` with the `ERR_INVALID_ARGS` code property.

The reducer function iterates the stream element-by-element which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the `readable.map` method.

@param fn a reducer function to call over every chunk in the stream. Async or not.
@param initial the initial value to use in the reduction.
@returns a promise for the final value of the reduction.
- eventName?: string | symbol): this;
Removes all listeners, or those of the specified eventName.

It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).

Returns a reference to the EventEmitter, so that calls can be chained.
- event: 'close',listener: () => void): this;
Removes the specified listener from the listener array for the event named eventName.

const callback = (stream) => {
  console.log('someone connected!');
};
server.on('connection', callback);
// ...
server.removeListener('connection', callback);

removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance.

Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from emit() in progress. Subsequent events behave as expected.

import { EventEmitter } from 'node:events';
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();

const callbackA = () => {
  console.log('A');
  myEmitter.removeListener('event', callbackB);
};

const callbackB = () => {
  console.log('B');
};

myEmitter.on('event', callbackA);
myEmitter.on('event', callbackB);

// callbackA removes listener callbackB but it will still be called.
// Internal listener array at time of emit [callbackA, callbackB]
myEmitter.emit('event');
// Prints:
//   A
//   B

// callbackB is now removed.
// Internal listener array [callbackA]
myEmitter.emit('event');
// Prints:
//   A

Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.

When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed:

import { EventEmitter } from 'node:events';
const ee = new EventEmitter();

function pong() {
  console.log('pong');
}

ee.on('ping', pong);
ee.once('ping', pong);
ee.removeListener('ping', pong);

ee.emit('ping');
ee.emit('ping');

Returns a reference to the EventEmitter, so that calls can be chained. The
readable.resume() method causes an explicitly paused Readable stream to resume emitting 'data' events, switching the stream into flowing mode.

The readable.resume() method can be used to fully consume the data from a stream without actually processing any of that data:

getReadableStreamSomehow()
  .resume()
  .on('end', () => {
    console.log('Reached the end, but did not read anything.');
  });

The readable.resume() method has no effect if there is a 'readable' event listener.
- autoPadding?: boolean): this;
When using block encryption algorithms, the Cipheriv class will automatically add padding to the input data to the appropriate block size. To disable the default padding call cipher.setAutoPadding(false).

When autoPadding is false, the length of the entire input data must be a multiple of the cipher's block size or cipher.final() will throw an error. Disabling automatic padding is useful for non-standard padding, for instance using 0x0 instead of PKCS padding.

The cipher.setAutoPadding() method must be called before cipher.final().

@returns for method chaining.
- encoding: BufferEncoding): this;
The writable.setDefaultEncoding() method sets the default encoding for a Writable stream.

@param encoding The new default encoding
- encoding: BufferEncoding): this;
The readable.setEncoding() method sets the character encoding for data read from the Readable stream.

By default, no encoding is assigned and stream data will be returned as Buffer objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than as Buffer objects. For instance, calling readable.setEncoding('utf8') will cause the output data to be interpreted as UTF-8 data, and passed as strings. Calling readable.setEncoding('hex') will cause the data to be encoded in hexadecimal string format.

The Readable stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream as Buffer objects.

const readable = getReadableStreamSomehow();
readable.setEncoding('utf8');
readable.on('data', (chunk) => {
  assert.equal(typeof chunk, 'string');
  console.log('Got %d characters of string data:', chunk.length);
});

@param encoding The encoding to use.
- n: number): this;
By default EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps finding memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.

Returns a reference to the EventEmitter, so that calls can be chained.
- some(): Promise<boolean>;
This method is similar to Array.prototype.some and calls fn on each chunk in the stream until the awaited return value is true (or any truthy value). Once an fn call's awaited return value is truthy for a chunk, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false.

@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to true if fn returned a truthy value for at least one of the chunks.
- @param limit the number of chunks to take from the readable.
@returns a stream with limit chunks taken.
- ): Promise<any[]>;
This method allows easily obtaining the contents of a stream.

As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.

@returns a promise containing an array with the contents of the stream.
The writable.uncork() method flushes all data buffered since cork was called.

When using writable.cork() and writable.uncork() to manage the buffering of writes to a stream, defer calls to writable.uncork() using process.nextTick(). Doing so allows batching of all writable.write() calls that occur within a given Node.js event loop phase.

stream.cork();
stream.write('some ');
stream.write('data ');
process.nextTick(() => stream.uncork());

If the writable.cork() method is called multiple times on a stream, the same number of calls to writable.uncork() must be called to flush the buffered data.

stream.cork();
stream.write('some ');
stream.cork();
stream.write('data ');
process.nextTick(() => {
  stream.uncork();
  // The data will not be flushed until uncork() is called a second time.
  stream.uncork();
});

See also: writable.cork().
- destination?: WritableStream): this;
The readable.unpipe() method detaches a Writable stream previously attached using the pipe method.

If the destination is not specified, then all pipes are detached.

If the destination is specified, but no pipe is set up for it, then the method does nothing.

import fs from 'node:fs';
const readable = getReadableStreamSomehow();
const writable = fs.createWriteStream('file.txt');
// All the data from readable goes into 'file.txt',
// but only for the first second.
readable.pipe(writable);
setTimeout(() => {
  console.log('Stop writing to file.txt.');
  readable.unpipe(writable);
  console.log('Manually close the file stream.');
  writable.end();
}, 1000);

@param destination Optional specific stream to unpipe
- chunk: any,encoding?: BufferEncoding): void;
Passing chunk as null signals the end of the stream (EOF) and behaves the same as readable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.

The readable.unshift() method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.

The stream.unshift(chunk) method cannot be called after the 'end' event has been emitted or a runtime error will be thrown.

Developers using stream.unshift() often should consider switching to use of a Transform stream instead. See the API for stream implementers section for more information.

// Pull off a header delimited by \n\n.
// Use unshift() if we get too much.
// Call the callback with (error, header, stream).
import { StringDecoder } from 'node:string_decoder';
function parseHeader(stream, callback) {
  stream.on('error', callback);
  stream.on('readable', onReadable);
  const decoder = new StringDecoder('utf8');
  let header = '';
  function onReadable() {
    let chunk;
    while (null !== (chunk = stream.read())) {
      const str = decoder.write(chunk);
      if (str.includes('\n\n')) {
        // Found the header boundary.
        const split = str.split(/\n\n/);
        header += split.shift();
        const remaining = split.join('\n\n');
        const buf = Buffer.from(remaining, 'utf8');
        stream.removeListener('error', callback);
        // Remove the 'readable' listener before unshifting.
        stream.removeListener('readable', onReadable);
        if (buf.length)
          stream.unshift(buf);
        // Now the body of the message can be read from the stream.
        callback(null, header, stream);
        return;
      }
      // Still reading the header.
      header += str;
    }
  }
}

Unlike push, stream.unshift(chunk) will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results if readable.unshift() is called during a read (i.e. from within a _read implementation on a custom stream). Following the call to readable.unshift() with an immediate push will reset the reading state appropriately; however, it is best to simply avoid calling readable.unshift() while in the process of performing a read.

@param chunk Chunk of data to unshift onto the read queue. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray}, {DataView} or null. For object mode streams, chunk may be any JavaScript value.
@param encoding Encoding of string chunks. Must be a valid Buffer encoding, such as 'utf8' or 'ascii'.
- ): NonSharedBuffer;
Updates the cipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer, TypedArray, or DataView. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.

The outputEncoding specifies the output format of the enciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.

The cipher.update() method can be called multiple times with new data until cipher.final() is called. Calling cipher.update() after cipher.final() will result in an error being thrown.
data: string,): NonSharedBuffer;
Updates the cipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer, TypedArray, or DataView. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.

The outputEncoding specifies the output format of the enciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.

The cipher.update() method can be called multiple times with new data until cipher.final() is called. Calling cipher.update() after cipher.final() will result in an error being thrown.

@param inputEncoding The encoding of the data.
data: ArrayBufferView,inputEncoding: undefined,): string;
Updates the cipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer, TypedArray, or DataView. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.

The outputEncoding specifies the output format of the enciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.

The cipher.update() method can be called multiple times with new data until cipher.final() is called. Calling cipher.update() after cipher.final() will result in an error being thrown.

@param inputEncoding The encoding of the data.
@param outputEncoding The encoding of the return value.
data: string,): string;
Updates the cipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer, TypedArray, or DataView. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.

The outputEncoding specifies the output format of the enciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.

The cipher.update() method can be called multiple times with new data until cipher.final() is called. Calling cipher.update() after cipher.final() will result in an error being thrown.

@param inputEncoding The encoding of the data.
@param outputEncoding The encoding of the return value.
- wrap(stream: ReadableStream): this;
Prior to Node.js 0.10, streams did not implement the entire node:stream module API as it is currently defined. (See Compatibility for more information.)

When using an older Node.js library that emits 'data' events and has a pause method that is advisory only, the readable.wrap() method can be used to create a Readable stream that uses the old stream as its data source.

It will rarely be necessary to use readable.wrap() but the method has been provided as a convenience for interacting with older Node.js applications and libraries.

import { OldReader } from './old-api-module.js';
import { Readable } from 'node:stream';
const oreader = new OldReader();
const myReader = new Readable().wrap(oreader);

myReader.on('readable', () => {
  myReader.read(); // etc.
});

@param stream An "old style" readable stream
- chunk: any,): boolean;
The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

function write(data, cb) {
  if (!stream.write(data)) {
    stream.once('drain', cb);
  } else {
    process.nextTick(cb);
  }
}

// Wait for cb to be called before doing any other write.
write('hello', () => {
  console.log('Write completed, do more writes now.');
});

A Writable stream in object mode will always ignore the encoding argument.

@param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.
@param callback Callback for when this chunk of data is flushed.
@returns false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.
chunk: any,encoding: BufferEncoding,): boolean;
The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

function write(data, cb) {
  if (!stream.write(data)) {
    stream.once('drain', cb);
  } else {
    process.nextTick(cb);
  }
}

// Wait for cb to be called before doing any other write.
write('hello', () => {
  console.log('Write completed, do more writes now.');
});

A Writable stream in object mode will always ignore the encoding argument.

@param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.
@param encoding The encoding, if chunk is a string.
@param callback Callback for when this chunk of data is flushed.
@returns false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.
- ): Disposable;
Listens once to the abort event on the provided signal.

Listening to the abort event on abort signals is unsafe and may lead to resource leaks since another third party with the signal can call e.stopImmediatePropagation(). Unfortunately Node.js cannot change this since it would violate the web standard. Additionally, the original API makes it easy to forget to remove listeners.

This API allows safely using AbortSignals in Node.js APIs by solving these two issues by listening to the event such that stopImmediatePropagation does not prevent the listener from running.

Returns a disposable so that it may be unsubscribed from more easily.

import { addAbortListener } from 'node:events';

function example(signal) {
  let disposable;
  try {
    signal.addEventListener('abort', (e) => e.stopImmediatePropagation());
    disposable = addAbortListener(signal, (e) => {
      // Do something when signal is aborted.
    });
  } finally {
    disposable?.[Symbol.dispose]();
  }
}

@returns Disposable that removes the
abort listener.
- src: string | Object | Stream | ArrayBuffer | Blob | Promise<any> | Iterable<any, any, any> | AsyncIterable<any, any, any> | AsyncGeneratorFunction
A utility method for creating duplex streams.

Stream converts writable stream into writable Duplex and readable stream to Duplex.
Blob converts into readable Duplex.
string converts into readable Duplex.
ArrayBuffer converts into readable Duplex.
AsyncIterable converts into a readable Duplex. Cannot yield null.
AsyncGeneratorFunction converts into a readable/writable transform Duplex. Must take a source AsyncIterable as first parameter. Cannot yield null.
AsyncFunction converts into a writable Duplex. Must return either null or undefined.
Object ({ writable, readable }) converts readable and writable into Stream and then combines them into Duplex where the Duplex will write to the writable and read from the readable.
Promise converts into readable Duplex. Value null is ignored.
- options?: Pick<DuplexOptions<Duplex>, 'signal' | 'allowHalfOpen' | 'decodeStrings' | 'encoding' | 'highWaterMark' | 'objectMode'>
A utility method for creating a Duplex from a web ReadableStream and WritableStream.
- name: string | symbol): Function[];
Returns a copy of the array of listeners for the event named eventName.

For EventEmitters this behaves exactly the same as calling .listeners on the emitter.

For EventTargets this is the only way to get the event listeners for the event target. This is useful for debugging and diagnostic purposes.

import { getEventListeners, EventEmitter } from 'node:events';

{
  const ee = new EventEmitter();
  const listener = () => console.log('Events are fun');
  ee.on('foo', listener);
  console.log(getEventListeners(ee, 'foo')); // [ [Function: listener] ]
}
{
  const et = new EventTarget();
  const listener = () => console.log('Events are fun');
  et.addEventListener('foo', listener);
  console.log(getEventListeners(et, 'foo')); // [ [Function: listener] ]
}
- ): number;
Returns the currently set max amount of listeners.

For EventEmitters this behaves exactly the same as calling .getMaxListeners on the emitter.

For EventTargets this is the only way to get the max event listeners for the event target. If the number of event handlers on a single EventTarget exceeds the max set, the EventTarget will print a warning.

import { getMaxListeners, setMaxListeners, EventEmitter } from 'node:events';

{
  const ee = new EventEmitter();
  console.log(getMaxListeners(ee)); // 10
  setMaxListeners(11, ee);
  console.log(getMaxListeners(ee)); // 11
}
{
  const et = new EventTarget();
  console.log(getMaxListeners(et)); // 10
  setMaxListeners(11, et);
  console.log(getMaxListeners(et)); // 11
}
- emitter: EventEmitter,eventName: string | symbol,options?: StaticEventEmitterIteratorOptions): AsyncIterator<any[]>;
import { on, EventEmitter } from 'node:events';
import process from 'node:process';

const ee = new EventEmitter();

// Emit later on
process.nextTick(() => {
  ee.emit('foo', 'bar');
  ee.emit('foo', 42);
});

for await (const event of on(ee, 'foo')) {
  // The execution of this inner block is synchronous and it
  // processes one event at a time (even with await). Do not use
  // if concurrent execution is required.
  console.log(event); // prints ['bar'] [42]
}
// Unreachable here

Returns an AsyncIterator that iterates eventName events. It will throw if the EventEmitter emits 'error'. It removes all listeners when exiting the loop. The value returned by each iteration is an array composed of the emitted event arguments.

An AbortSignal can be used to cancel waiting on events:

import { on, EventEmitter } from 'node:events';
import process from 'node:process';

const ac = new AbortController();

(async () => {
  const ee = new EventEmitter();

  // Emit later on
  process.nextTick(() => {
    ee.emit('foo', 'bar');
    ee.emit('foo', 42);
  });

  for await (const event of on(ee, 'foo', { signal: ac.signal })) {
    // The execution of this inner block is synchronous and it
    // processes one event at a time (even with await). Do not use
    // if concurrent execution is required.
    console.log(event); // prints ['bar'] [42]
  }
  // Unreachable here
})();

process.nextTick(() => ac.abort());

Use the close option to specify an array of event names that will end the iteration:

import { on, EventEmitter } from 'node:events';
import process from 'node:process';

const ee = new EventEmitter();

// Emit later on
process.nextTick(() => {
  ee.emit('foo', 'bar');
  ee.emit('foo', 42);
  ee.emit('close');
});

for await (const event of on(ee, 'foo', { close: ['close'] })) {
  console.log(event); // prints ['bar'] [42]
}
// the loop will exit after 'close' is emitted
console.log('done'); // prints 'done'

@returns An AsyncIterator that iterates eventName events emitted by the emitter
eventName: string,options?: StaticEventEmitterIteratorOptions): AsyncIterator<any[]>;
import { on, EventEmitter } from 'node:events';
import process from 'node:process';

const ee = new EventEmitter();

// Emit later on
process.nextTick(() => {
  ee.emit('foo', 'bar');
  ee.emit('foo', 42);
});

for await (const event of on(ee, 'foo')) {
  // The execution of this inner block is synchronous and it
  // processes one event at a time (even with await). Do not use
  // if concurrent execution is required.
  console.log(event); // prints ['bar'] [42]
}
// Unreachable here

Returns an AsyncIterator that iterates eventName events. It will throw if the EventEmitter emits 'error'. It removes all listeners when exiting the loop. The value returned by each iteration is an array composed of the emitted event arguments.

An AbortSignal can be used to cancel waiting on events:

import { on, EventEmitter } from 'node:events';
import process from 'node:process';

const ac = new AbortController();

(async () => {
  const ee = new EventEmitter();

  // Emit later on
  process.nextTick(() => {
    ee.emit('foo', 'bar');
    ee.emit('foo', 42);
  });

  for await (const event of on(ee, 'foo', { signal: ac.signal })) {
    // The execution of this inner block is synchronous and it
    // processes one event at a time (even with await). Do not use
    // if concurrent execution is required.
    console.log(event); // prints ['bar'] [42]
  }
  // Unreachable here
})();

process.nextTick(() => ac.abort());

Use the close option to specify an array of event names that will end the iteration:

import { on, EventEmitter } from 'node:events';
import process from 'node:process';

const ee = new EventEmitter();

// Emit later on
process.nextTick(() => {
  ee.emit('foo', 'bar');
  ee.emit('foo', 42);
  ee.emit('close');
});

for await (const event of on(ee, 'foo', { close: ['close'] })) {
  console.log(event); // prints ['bar'] [42]
}
// the loop will exit after 'close' is emitted
console.log('done'); // prints 'done'

@returns An AsyncIterator that iterates eventName events emitted by the emitter
- emitter: EventEmitter,eventName: string | symbol,options?: StaticEventEmitterOptions): Promise<any[]>;
Creates a Promise that is fulfilled when the EventEmitter emits the given event or that is rejected if the EventEmitter emits 'error' while waiting. The Promise will resolve with an array of all the arguments emitted to the given event.

This method is intentionally generic and works with the web platform EventTarget interface, which has no special 'error' event semantics and does not listen to the 'error' event.

import { once, EventEmitter } from 'node:events';
import process from 'node:process';

const ee = new EventEmitter();

process.nextTick(() => {
  ee.emit('myevent', 42);
});

const [value] = await once(ee, 'myevent');
console.log(value);

const err = new Error('kaboom');
process.nextTick(() => {
  ee.emit('error', err);
});

try {
  await once(ee, 'myevent');
} catch (err) {
  console.error('error happened', err);
}

The special handling of the 'error' event is only used when events.once() is used to wait for another event. If events.once() is used to wait for the 'error' event itself, then it is treated as any other kind of event without special handling:

import { EventEmitter, once } from 'node:events';

const ee = new EventEmitter();

once(ee, 'error')
  .then(([err]) => console.log('ok', err.message))
  .catch((err) => console.error('error', err.message));

ee.emit('error', new Error('boom'));

// Prints: ok boom

An AbortSignal can be used to cancel waiting for the event:

import { EventEmitter, once } from 'node:events';

const ee = new EventEmitter();
const ac = new AbortController();

async function foo(emitter, event, signal) {
  try {
    await once(emitter, event, { signal });
    console.log('event emitted!');
  } catch (error) {
    if (error.name === 'AbortError') {
      console.error('Waiting for the event was canceled!');
    } else {
      console.error('There was an error', error.message);
    }
  }
}

foo(ee, 'foo', ac.signal);
ac.abort(); // Abort waiting for the event
ee.emit('foo'); // Prints: Waiting for the event was canceled!
eventName: string,options?: StaticEventEmitterOptions): Promise<any[]>;
Creates a Promise that is fulfilled when the EventEmitter emits the given event or that is rejected if the EventEmitter emits 'error' while waiting. The Promise will resolve with an array of all the arguments emitted to the given event.

This method is intentionally generic and works with the web platform EventTarget interface, which has no special 'error' event semantics and does not listen to the 'error' event.

import { once, EventEmitter } from 'node:events';
import process from 'node:process';

const ee = new EventEmitter();

process.nextTick(() => {
  ee.emit('myevent', 42);
});

const [value] = await once(ee, 'myevent');
console.log(value);

const err = new Error('kaboom');
process.nextTick(() => {
  ee.emit('error', err);
});

try {
  await once(ee, 'myevent');
} catch (err) {
  console.error('error happened', err);
}

The special handling of the 'error' event is only used when events.once() is used to wait for another event. If events.once() is used to wait for the 'error' event itself, then it is treated as any other kind of event without special handling:

import { EventEmitter, once } from 'node:events';

const ee = new EventEmitter();

once(ee, 'error')
  .then(([err]) => console.log('ok', err.message))
  .catch((err) => console.error('error', err.message));

ee.emit('error', new Error('boom'));

// Prints: ok boom

An AbortSignal can be used to cancel waiting for the event:

import { EventEmitter, once } from 'node:events';

const ee = new EventEmitter();
const ac = new AbortController();

async function foo(emitter, event, signal) {
  try {
    await once(emitter, event, { signal });
    console.log('event emitted!');
  } catch (error) {
    if (error.name === 'AbortError') {
      console.error('Waiting for the event was canceled!');
    } else {
      console.error('There was an error', error.message);
    }
  }
}

foo(ee, 'foo', ac.signal);
ac.abort(); // Abort waiting for the event
ee.emit('foo'); // Prints: Waiting for the event was canceled!
- n?: number,): void;
import { setMaxListeners, EventEmitter } from 'node:events';

const target = new EventTarget();
const emitter = new EventEmitter();
setMaxListeners(5, target, emitter);

@param n A non-negative number. The maximum number of listeners per EventTarget event.
@param eventTargets Zero or more {EventTarget} or {EventEmitter} instances. If none are specified, n is set as the default max for all newly created {EventTarget} and {EventEmitter} objects.
A utility method for creating a web ReadableStream and WritableStream from a Duplex.
class Decipheriv
Instances of the Decipheriv class are used to decrypt data. The class can be used in one of two ways:
- As a stream that is both readable and writable, where plain encrypted data is written to produce unencrypted data on the readable side, or
- Using the decipher.update() and decipher.final() methods to produce the unencrypted data.
The createDecipheriv method is used to create Decipheriv instances. Decipheriv objects are not to be created directly using the new keyword.
Example: Using
Decipherivobjects as streams:import { Buffer } from 'node:buffer'; const { scryptSync, createDecipheriv, } = await import('node:crypto'); const algorithm = 'aes-192-cbc'; const password = 'Password used to generate key'; // Key length is dependent on the algorithm. In this case for aes192, it is // 24 bytes (192 bits). // Use the async `crypto.scrypt()` instead. const key = scryptSync(password, 'salt', 24); // The IV is usually passed along with the ciphertext. const iv = Buffer.alloc(16, 0); // Initialization vector. const decipher = createDecipheriv(algorithm, key, iv); let decrypted = ''; decipher.on('readable', () => { let chunk; while (null !== (chunk = decipher.read())) { decrypted += chunk.toString('utf8'); } }); decipher.on('end', () => { console.log(decrypted); // Prints: some clear text data }); // Encrypted with same algorithm, key and iv. const encrypted = 'e5f79c5915c02171eec6b212d5520d44480993d7d622a7c4c2da32f6efda0ffa'; decipher.write(encrypted, 'hex'); decipher.end();Example: Using
Decipherivand piped streams:import { createReadStream, createWriteStream, } from 'node:fs'; import { Buffer } from 'node:buffer'; const { scryptSync, createDecipheriv, } = await import('node:crypto'); const algorithm = 'aes-192-cbc'; const password = 'Password used to generate key'; // Use the async `crypto.scrypt()` instead. const key = scryptSync(password, 'salt', 24); // The IV is usually passed along with the ciphertext. const iv = Buffer.alloc(16, 0); // Initialization vector. const decipher = createDecipheriv(algorithm, key, iv); const input = createReadStream('test.enc'); const output = createWriteStream('test.js'); input.pipe(decipher).pipe(output);Example: Using the
decipher.update()anddecipher.final()methods:import { Buffer } from 'node:buffer'; const { scryptSync, createDecipheriv, } = await import('node:crypto'); const algorithm = 'aes-192-cbc'; const password = 'Password used to generate key'; // Use the async `crypto.scrypt()` instead. const key = scryptSync(password, 'salt', 24); // The IV is usually passed along with the ciphertext. const iv = Buffer.alloc(16, 0); // Initialization vector. const decipher = createDecipheriv(algorithm, key, iv); // Encrypted using same algorithm, key and iv. const encrypted = 'e5f79c5915c02171eec6b212d5520d44480993d7d622a7c4c2da32f6efda0ffa'; let decrypted = decipher.update(encrypted, 'hex', 'utf8'); decrypted += decipher.final('utf8'); console.log(decrypted); // Prints: some clear text data- allowHalfOpen: boolean
If false then the stream will automatically end the writable side when the readable side ends. Set initially by the allowHalfOpen constructor option, which defaults to true.
This can be changed manually to change the half-open behavior of an existing Duplex stream instance, but must be changed before the 'end' event is emitted.
- readable: boolean
Is true if it is safe to call read, which means the stream has not been destroyed or emitted 'error' or 'end'.
- readonly readableAborted: boolean
Returns whether the stream was destroyed or errored before emitting 'end'.
- readonly readableEncoding: null | BufferEncoding
Getter for the property encoding of a given Readable stream. The encoding property can be set using the setEncoding method.
- readonly readableFlowing: null | boolean
This property reflects the current state of a Readable stream as described in the Three states section.
- readonly readableHighWaterMark: number
Returns the value of highWaterMark passed when creating this Readable.
- readonly readableLength: number
This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the highWaterMark.
- readonly writable: boolean
Is true if it is safe to call writable.write(), which means the stream has not been destroyed, errored, or ended.
- readonly writableAborted: boolean
Returns whether the stream was destroyed or errored before emitting 'finish'.
- readonly writableCorked: number
Number of times writable.uncork() needs to be called in order to fully uncork the stream.
- readonly writableEnded: boolean
Is true after writable.end() has been called. This property does not indicate whether the data has been flushed; for this, use writable.writableFinished instead.
- readonly writableHighWaterMark: number
Returns the value of highWaterMark passed when creating this Writable.
- readonly writableLength: number
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the highWaterMark.
- readonly writableNeedDrain: boolean
Is true if the stream's buffer has been full and the stream will emit 'drain'.
- static captureRejections: boolean
Value: boolean
Change the default captureRejections option on all new EventEmitter objects.
- readonly static captureRejectionSymbol: typeof captureRejectionSymbol
Value: Symbol.for('nodejs.rejection')
See how to write a custom rejection handler.
- static defaultMaxListeners: number
By default, a maximum of 10 listeners can be registered for any single event. This limit can be changed for individual EventEmitter instances using the emitter.setMaxListeners(n) method. To change the default for all EventEmitter instances, the events.defaultMaxListeners property can be used. If this value is not a positive number, a RangeError is thrown.
Take caution when setting events.defaultMaxListeners because the change affects all EventEmitter instances, including those created before the change is made. However, calling emitter.setMaxListeners(n) still has precedence over events.defaultMaxListeners.
This is not a hard limit. The
EventEmitterinstance will allow more listeners to be added but will output a trace warning to stderr indicating that a "possible EventEmitter memory leak" has been detected. For any singleEventEmitter, theemitter.getMaxListeners()andemitter.setMaxListeners()methods can be used to temporarily avoid this warning:import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.setMaxListeners(emitter.getMaxListeners() + 1); emitter.once('event', () => { // do stuff emitter.setMaxListeners(Math.max(emitter.getMaxListeners() - 1, 0)); });The
--trace-warningscommand-line flag can be used to display the stack trace for such warnings.The emitted warning can be inspected with
process.on('warning')and will have the additionalemitter,type, andcountproperties, referring to the event emitter instance, the event's name and the number of attached listeners, respectively. Itsnameproperty is set to'MaxListenersExceededWarning'. - readonly static errorMonitor: typeof errorMonitor
This symbol shall be used to install a listener for only monitoring 'error' events. Listeners installed using this symbol are called before the regular 'error' listeners are called.
Installing a listener using this symbol does not change the behavior once an 'error' event is emitted. Therefore, the process will still crash if no regular 'error' listener is installed.
Calls readable.destroy() with an AbortError and returns a promise that fulfills when the stream is finished.
- @returns AsyncIterator to fully consume the stream.
- event: 'close',listener: () => void): this;
Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'data',listener: (chunk: any) => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'drain',listener: () => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'end',listener: () => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'error',): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'finish',listener: () => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'pause',listener: () => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'pipe',): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'readable',listener: () => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'resume',listener: () => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'unpipe',): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: string | symbol,listener: (...args: any[]) => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
This method returns a new stream with chunks of the underlying stream paired with a counter in the form [index, chunk]. The first index value is 0 and it increases by 1 for each chunk produced.
@returns a stream of indexed pairs.
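The same [index, chunk] pairing can be sketched with the experimental stream helpers; here map() carries a counter, since the dedicated indexed-pairs helper may not be available in every Node.js release:

```javascript
import { Readable } from 'node:stream';

// Pair each chunk with its position, mirroring the [index, chunk] shape.
let index = 0;
const pairs = await Readable.from(['a', 'b', 'c'])
  .map((chunk) => [index++, chunk])
  .toArray();
console.log(pairs); // [[0, 'a'], [1, 'b'], [2, 'c']]
```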
- stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
The writable.cork() method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.
The primary intent of writable.cork() is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them all to writable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork() without implementing writable._writev() may have an adverse effect on throughput.
See also: writable.uncork(), writable._writev().
- ): this;
Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the readable stream will release any internal resources and subsequent calls to push() will be ignored.
Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.
Implementors should not override this method, but instead implement readable._destroy().
@param error Error which will be passed as payload in 'error' event
- drop(limit: number,
This method returns a new stream with the first limit chunks dropped from the start.
@param limit the number of chunks to drop from the readable.
@returns a stream with limit chunks dropped from the start.
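A minimal sketch using the experimental stream helpers, with toArray() collecting the surviving chunks:

```javascript
import { Readable } from 'node:stream';

// Drop the first two chunks; the rest flow through unchanged.
const rest = await Readable.from([1, 2, 3, 4, 5]).drop(2).toArray();
console.log(rest); // [3, 4, 5]
```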
- emit(event: 'close'): boolean;
Synchronously calls each of the listeners registered for the event named
eventName, in the order they were registered, passing the supplied arguments to each.Returns
trueif the event had listeners,falseotherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener - end(cb?: () => void): this;
Calling the
writable.end()method signals that no more data will be written to theWritable. The optionalchunkandencodingarguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!end(chunk: any,cb?: () => void): this;Calling the
writable.end()method signals that no more data will be written to theWritable. The optionalchunkandencodingarguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.end(chunk: any,encoding: BufferEncoding,cb?: () => void): this;Calling the
writable.end()method signals that no more data will be written to theWritable. The optionalchunkandencodingarguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.@param encodingThe encoding if
chunkis a string Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbols.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]- ): Promise<boolean>;
This method is similar to Array.prototype.every and calls fn on each chunk in the stream to check whether all awaited return values are truthy for fn. Once an fn call on a chunk's awaited return value is falsy, the stream is destroyed and the promise is fulfilled with false. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true.
@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to true if fn returned a truthy value for every one of the chunks.
This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise, that promise will be awaited.
@param fn a function to filter chunks from the stream. Async or not.
@returns a stream filtered with the predicate fn.
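A short sketch of filter() using the experimental stream helpers:

```javascript
import { Readable } from 'node:stream';

// Keep only the chunks for which the predicate returns a truthy value.
// The predicate may also be async; its promise is awaited.
const evens = await Readable.from([1, 2, 3, 4])
  .filter((n) => n % 2 === 0)
  .toArray();
console.log(evens); // [2, 4]
```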
Once the decipher.final() method has been called, the Decipheriv object can no longer be used to decrypt data. Attempts to call decipher.final() more than once will result in an error being thrown.
@returns Any remaining deciphered contents. If outputEncoding is specified, a string is returned. If an outputEncoding is not provided, a Buffer is returned.
outputEncoding: BufferEncoding): string;
Once the decipher.final() method has been called, the Decipheriv object can no longer be used to decrypt data. Attempts to call decipher.final() more than once will result in an error being thrown.
@param outputEncoding The encoding of the return value.
@returns Any remaining deciphered contents. If outputEncoding is specified, a string is returned. If an outputEncoding is not provided, a Buffer is returned.
- ): Promise<undefined | T>;
This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.
@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.
find(): Promise<any>;
This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.
@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.
This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.
It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.
@param fn a function to map over every chunk in the stream. May be async. May be a stream or generator.
@returns a stream flat-mapped with the function fn.
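A sketch of flatMap() with the experimental stream helpers, where each fn call returns an iterable that is flattened into the result stream:

```javascript
import { Readable } from 'node:stream';

// Each line yields several words; the word arrays are flattened into one stream.
const words = await Readable.from(['hello world', 'stream api'])
  .flatMap((line) => line.split(' '))
  .toArray();
console.log(words); // ['hello', 'world', 'stream', 'api']
```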
- ): Promise<void>;
This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise, that promise will be awaited.
This method is different from for await...of loops in that it can optionally process chunks concurrently. In addition, a forEach iteration can only be stopped by having passed a signal option and aborting the related AbortController, while for await...of can be stopped with break or return. In either case the stream will be destroyed.
This method is different from listening to the 'data' event in that it uses the readable event in the underlying machinery and can limit the number of concurrent fn calls.
@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise for when the stream has finished.
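A minimal sketch of forEach() using the experimental stream helpers:

```javascript
import { Readable } from 'node:stream';

// The returned promise settles once every chunk has been visited.
let total = 0;
await Readable.from([1, 2, 3, 4]).forEach((n) => {
  total += n;
});
console.log(total); // 10
```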
Returns the current max listener value for the
EventEmitterwhich is either set byemitter.setMaxListeners(n)or defaults to EventEmitter.defaultMaxListeners.The
readable.isPaused()method returns the current operating state of theReadable. This is used primarily by the mechanism that underlies thereadable.pipe()method. In most typical cases, there will be no reason to use this method directly.const readable = new stream.Readable(); readable.isPaused(); // === false readable.pause(); readable.isPaused(); // === true readable.resume(); readable.isPaused(); // === false- options?: { destroyOnReturn: boolean }): AsyncIterator<any>;
The iterator created by this method gives users the option to cancel the destruction of the stream if the
for await...ofloop is exited byreturn,break, orthrow, or if the iterator should destroy the stream if the stream emitted an error during iteration. - eventName: string | symbol,listener?: Function): number;
Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.
@param eventName The name of the event being listened for
@param listener The event handler function
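A short sketch of both forms; note that the optional listener argument requires a recent Node.js release:

```javascript
import { EventEmitter } from 'node:events';

const ee = new EventEmitter();
const handler = () => {};
ee.on('ping', handler);
ee.on('ping', handler);
ee.on('ping', () => {});

console.log(ee.listenerCount('ping'));          // 3
console.log(ee.listenerCount('ping', handler)); // 2: `handler` appears twice
```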
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName.server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ] - map(
This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise, that promise will be awaited before being passed to the result stream.
@param fn a function to map over every chunk in the stream. Async or not.
@returns a stream mapped with the function fn.
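A minimal sketch of map() with the experimental stream helpers; an async fn is awaited before its result is forwarded downstream:

```javascript
import { Readable } from 'node:stream';

// Square each chunk; the async mapper's promise is awaited per chunk.
const squares = await Readable.from([1, 2, 3])
  .map(async (n) => n * n)
  .toArray();
console.log(squares); // [1, 4, 9]
```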
- eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for
emitter.removeListener(). - on(event: 'close',listener: () => void): this;
Adds the
listenerfunction to the end of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
- once(event: 'close',listener: () => void): this;
Adds a one-time
listenerfunction for the event namedeventName. The next timeeventNameis triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
The
readable.pause()method will cause a stream in flowing mode to stop emitting'data'events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.const readable = getReadableStreamSomehow(); readable.on('data', (chunk) => { console.log(`Received ${chunk.length} bytes of data.`); readable.pause(); console.log('There will be no additional data for 1 second.'); setTimeout(() => { console.log('Now data will start flowing again.'); readable.resume(); }, 1000); });The
readable.pause()method has no effect if there is a'readable'event listener.- event: 'close',listener: () => void): this;
Adds the
listenerfunction to the beginning of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
- event: 'close',listener: () => void): this;
Adds a one-time
listenerfunction for the event namedeventNameto the beginning of the listeners array. The next timeeventNameis triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName, including any wrappers (such as those created by.once()).import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log'); - read(size?: number): any;
The
readable.read()method reads data out of the internal buffer and returns it. If no data is available to be read,nullis returned. By default, the data is returned as aBufferobject unless an encoding has been specified using thereadable.setEncoding()method or the stream is operating in object mode.The optional
sizeargument specifies a specific number of bytes to read. Ifsizebytes are not available to be read,nullwill be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.If the
sizeargument is not specified, all of the data contained in the internal buffer will be returned.The
sizeargument must be less than or equal to 1 GiB.The
readable.read()method should only be called onReadablestreams operating in paused mode. In flowing mode,readable.read()is called automatically until the internal buffer is fully drained.const readable = getReadableStreamSomehow(); // 'readable' may be triggered multiple times as data is buffered in readable.on('readable', () => { let chunk; console.log('Stream is readable (new data received in buffer)'); // Use a loop to make sure we read all currently available data while (null !== (chunk = readable.read())) { console.log(`Read ${chunk.length} bytes of data...`); } }); // 'end' will be triggered once when there is no more data available readable.on('end', () => { console.log('Reached end of stream.'); });Each call to
readable.read()returns a chunk of data, ornull. The chunks are not concatenated. Awhileloop is necessary to consume all data currently in the buffer. When reading a large file.read()may returnnull, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new'readable'event will be emitted when there is more data in the buffer. Finally the'end'event will be emitted when there is no more data to come.Therefore to read a file's whole contents from a
readable, it is necessary to collect chunks across multiple'readable'events:const chunks = []; readable.on('readable', () => { let chunk; while (null !== (chunk = readable.read())) { chunks.push(chunk); } }); readable.on('end', () => { const content = chunks.join(''); });A
Readablestream in object mode will always return a single item from a call toreadable.read(size), regardless of the value of thesizeargument.If the
readable.read()method returns a chunk of data, a'data'event will also be emitted.Calling read after the
'end'event has been emitted will returnnull. No runtime error will be raised.@param sizeOptional argument to specify how much data to read.
- initial?: undefined,): Promise<T>;
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied, the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.
The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the readable.map method.
@param fn a reducer function to call over every chunk in the stream. Async or not.
@param initial the initial value to use in the reduction.
@returns a promise for the final value of the reduction.
initial: T,): Promise<T>;
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied, the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.
The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the readable.map method.
@param fn a reducer function to call over every chunk in the stream. Async or not.
@param initial the initial value to use in the reduction.
@returns a promise for the final value of the reduction.
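A minimal sketch of reduce() with the experimental stream helpers:

```javascript
import { Readable } from 'node:stream';

// Fold the stream into a single value, starting from the initial value 0.
const sum = await Readable.from([1, 2, 3, 4])
  .reduce((acc, n) => acc + n, 0);
console.log(sum); // 10
```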
- eventName?: string | symbol): this;
Removes all listeners, or those of the specified
eventName.It is bad practice to remove listeners added elsewhere in the code, particularly when the
EventEmitterinstance was created by some other component or module (e.g. sockets or file streams).Returns a reference to the
EventEmitter, so that calls can be chained. - event: 'close',listener: () => void): this;
Removes the specified
listener from the listener array for the event named eventName. const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback); removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance. Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any
removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from emit() in progress. Subsequent events behave as expected. import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // A Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the
emitter.listeners() method will need to be recreated. When a single function has been added as a handler multiple times for a single event (as in the example below),
removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed: import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping'); Returns a reference to the
EventEmitter, so that calls can be chained. The
readable.resume() method causes an explicitly paused Readable stream to resume emitting 'data' events, switching the stream into flowing mode. The
readable.resume() method can be used to fully consume the data from a stream without actually processing any of that data: getReadableStreamSomehow() .resume() .on('end', () => { console.log('Reached the end, but did not read anything.'); }); The
readable.resume() method has no effect if there is a 'readable' event listener. - auto_padding?: boolean): this;
When data has been encrypted without standard block padding, calling
decipher.setAutoPadding(false) will disable automatic padding to prevent decipher.final() from checking for and removing padding. Turning auto padding off will only work if the input data's length is a multiple of the cipher's block size.
The
decipher.setAutoPadding() method must be called before decipher.final(). @returns for method chaining.
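As a sketch of how both sides might disable automatic padding, assuming AES-256-CBC and a plaintext that is manually kept at a multiple of the 16-byte block size:

```javascript
import { createCipheriv, createDecipheriv, randomBytes } from 'node:crypto';

const key = randomBytes(32);
const iv = randomBytes(16);

// Exactly 16 bytes, so padding can be disabled on both sides.
const plaintext = Buffer.from('sixteen byte txt');

const cipher = createCipheriv('aes-256-cbc', key, iv);
cipher.setAutoPadding(false); // input must be block-aligned
const encrypted = Buffer.concat([cipher.update(plaintext), cipher.final()]);

const decipher = createDecipheriv('aes-256-cbc', key, iv);
decipher.setAutoPadding(false); // must be called before decipher.final()
const decrypted = Buffer.concat([decipher.update(encrypted), decipher.final()]);

console.log(decrypted.toString()); // 'sixteen byte txt'
```

With auto padding left on, decipher.final() would instead attempt to strip PKCS#7 padding and throw if none is present.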
- encoding: BufferEncoding): this;
The
writable.setDefaultEncoding() method sets the default encoding for a Writable stream. @param encoding The new default encoding
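A small sketch of the effect, using a hypothetical Writable that records what it receives: with the default encoding set to 'hex', string chunks are decoded from hex rather than the usual utf8 before reaching the underlying write() implementation.

```javascript
import { Writable } from 'node:stream';

const received = [];
const writable = new Writable({
  write(chunk, encoding, callback) {
    received.push(chunk); // chunks arrive as Buffers (decodeStrings is true)
    callback();
  },
});

writable.setDefaultEncoding('hex');
writable.write('deadbeef'); // interpreted as hex, not utf8
writable.end();

console.log(received[0]); // <Buffer de ad be ef>
```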
- encoding: BufferEncoding): this;
The
readable.setEncoding() method sets the character encoding for data read from the Readable stream. By default, no encoding is assigned and stream data will be returned as
Buffer objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than as Buffer objects. For instance, calling readable.setEncoding('utf8') will cause the output data to be interpreted as UTF-8 data, and passed as strings. Calling readable.setEncoding('hex') will cause the data to be encoded in hexadecimal string format. The
Readable stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream as Buffer objects. const readable = getReadableStreamSomehow(); readable.setEncoding('utf8'); readable.on('data', (chunk) => { assert.equal(typeof chunk, 'string'); console.log('Got %d characters of string data:', chunk.length); }); @param encoding The encoding to use.
- n: number): this;
By default
EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps finding memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners. Returns a reference to the
EventEmitter, so that calls can be chained. - some(): Promise<boolean>;
This method is similar to
Array.prototype.some and calls fn on each chunk in the stream until the awaited return value is true (or any truthy value). Once an fn call on a chunk's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false. @param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to
true if fn returned a truthy value for at least one of the chunks. - @param limit
the number of chunks to take from the readable.
@returns a stream with limit chunks taken.
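A short sketch combining the two helpers above, using Readable.from to build small in-memory streams:

```javascript
import { Readable } from 'node:stream';

// some() resolves true as soon as one chunk satisfies fn,
// then destroys the stream without consuming the rest.
const hasLarge = await Readable.from([1, 2, 3, 4]).some((n) => n > 2);
console.log(hasLarge); // true

// take() limits the stream to the first `limit` chunks.
const firstTwo = [];
for await (const chunk of Readable.from(['a', 'b', 'c']).take(2)) {
  firstTwo.push(chunk);
}
console.log(firstTwo); // [ 'a', 'b' ]
```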
- ): Promise<any[]>;
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.
@returns a promise containing an array with the contents of the stream.
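A minimal sketch, assuming a small in-memory stream where buffering everything is acceptable:

```javascript
import { Readable } from 'node:stream';

// toArray() reads the entire stream into memory and
// resolves with an array of its chunks.
const contents = await Readable.from(['alpha', 'beta', 'gamma']).toArray();
console.log(contents); // [ 'alpha', 'beta', 'gamma' ]
```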
The
writable.uncork() method flushes all data buffered since writable.cork() was called. When using
writable.cork() and writable.uncork() to manage the buffering of writes to a stream, defer calls to writable.uncork() using process.nextTick(). Doing so allows batching of all writable.write() calls that occur within a given Node.js event loop phase. stream.cork(); stream.write('some '); stream.write('data '); process.nextTick(() => stream.uncork()); If the
writable.cork() method is called multiple times on a stream, the same number of calls to writable.uncork() must be called to flush the buffered data. stream.cork(); stream.write('some '); stream.cork(); stream.write('data '); process.nextTick(() => { stream.uncork(); // The data will not be flushed until uncork() is called a second time. stream.uncork(); }); See also:
writable.cork(). - destination?: WritableStream): this;
The
readable.unpipe() method detaches a Writable stream previously attached using the pipe method. If the
destination is not specified, then all pipes are detached. If the
destination is specified, but no pipe is set up for it, then the method does nothing. import fs from 'node:fs'; const readable = getReadableStreamSomehow(); const writable = fs.createWriteStream('file.txt'); // All the data from readable goes into 'file.txt', // but only for the first second. readable.pipe(writable); setTimeout(() => { console.log('Stop writing to file.txt.'); readable.unpipe(writable); console.log('Manually close the file stream.'); writable.end(); }, 1000); @param destination Optional specific stream to unpipe
- chunk: any,encoding?: BufferEncoding): void;
Passing
chunk as null signals the end of the stream (EOF) and behaves the same as readable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed. The
readable.unshift() method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party. The
stream.unshift(chunk) method cannot be called after the 'end' event has been emitted or a runtime error will be thrown. Developers using
stream.unshift() often should consider switching to use of a Transform stream instead. See the API for stream implementers section for more information. // Pull off a header delimited by \n\n. // Use unshift() if we get too much. // Call the callback with (error, header, stream). import { StringDecoder } from 'node:string_decoder'; function parseHeader(stream, callback) { stream.on('error', callback); stream.on('readable', onReadable); const decoder = new StringDecoder('utf8'); let header = ''; function onReadable() { let chunk; while (null !== (chunk = stream.read())) { const str = decoder.write(chunk); if (str.includes('\n\n')) { // Found the header boundary. const split = str.split(/\n\n/); header += split.shift(); const remaining = split.join('\n\n'); const buf = Buffer.from(remaining, 'utf8'); stream.removeListener('error', callback); // Remove the 'readable' listener before unshifting. stream.removeListener('readable', onReadable); if (buf.length) stream.unshift(buf); // Now the body of the message can be read from the stream. callback(null, header, stream); return; } // Still reading the header. header += str; } } } Unlike push,
stream.unshift(chunk) will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results if readable.unshift() is called during a read (i.e. from within a _read implementation on a custom stream). Following the call to readable.unshift() with an immediate push will reset the reading state appropriately, however it is best to simply avoid calling readable.unshift() while in the process of performing a read. @param chunk Chunk of data to unshift onto the read queue. For streams not operating in object mode,
chunk must be a {string}, {Buffer}, {TypedArray}, {DataView} or null. For object mode streams, chunk may be any JavaScript value. @param encoding Encoding of string chunks. Must be a valid
Buffer encoding, such as 'utf8' or 'ascii'. - data: ArrayBufferView): NonSharedBuffer;
Updates the decipher with
data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer. If data is a Buffer then inputEncoding is ignored. The
outputEncoding specifies the output format of the enciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned. The
decipher.update() method can be called multiple times with new data until decipher.final() is called. Calling decipher.update() after decipher.final() will result in an error being thrown. data: string,): NonSharedBuffer; Updates the decipher with
data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer. If data is a Buffer then inputEncoding is ignored. The
outputEncoding specifies the output format of the enciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned. The
decipher.update() method can be called multiple times with new data until decipher.final() is called. Calling decipher.update() after decipher.final() will result in an error being thrown. @param inputEncoding The
encoding of the data string. data: ArrayBufferView, inputEncoding: undefined,): string; Updates the decipher with
data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer. If data is a Buffer then inputEncoding is ignored. The
outputEncoding specifies the output format of the enciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned. The
decipher.update() method can be called multiple times with new data until decipher.final() is called. Calling decipher.update() after decipher.final() will result in an error being thrown. @param inputEncoding The
encoding of the data string. @param outputEncoding The
encoding of the return value. data: string,): string; Updates the decipher with
data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer. If data is a Buffer then inputEncoding is ignored. The
outputEncoding specifies the output format of the enciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned. The
decipher.update() method can be called multiple times with new data until decipher.final() is called. Calling decipher.update() after decipher.final() will result in an error being thrown. @param inputEncoding The
encoding of the data string. @param outputEncoding The
encoding of the return value. - wrap(stream: ReadableStream): this;
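A round-trip sketch of the string overloads described above, assuming AES-256-CBC (in real code, never reuse an IV with the same key):

```javascript
import { createCipheriv, createDecipheriv, randomBytes, scryptSync } from 'node:crypto';

const key = scryptSync('a password', 'salt', 32);
const iv = randomBytes(16);

const cipher = createCipheriv('aes-256-cbc', key, iv);
let encrypted = cipher.update('some clear text data', 'utf8', 'hex');
encrypted += cipher.final('hex');

// update() may be called repeatedly with new data until final() is called.
const decipher = createDecipheriv('aes-256-cbc', key, iv);
let decrypted = decipher.update(encrypted, 'hex', 'utf8');
decrypted += decipher.final('utf8');
console.log(decrypted); // 'some clear text data'
```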
Prior to Node.js 0.10, streams did not implement the entire
node:stream module API as it is currently defined. (See Compatibility for more information.) When using an older Node.js library that emits
'data' events and has a pause method that is advisory only, the readable.wrap() method can be used to create a Readable stream that uses the old stream as its data source. It will rarely be necessary to use
readable.wrap() but the method has been provided as a convenience for interacting with older Node.js applications and libraries. import { OldReader } from './old-api-module.js'; import { Readable } from 'node:stream'; const oreader = new OldReader(); const myReader = new Readable().wrap(oreader); myReader.on('readable', () => { myReader.read(); // etc. }); @param stream An "old style" readable stream
- chunk: any,): boolean;
The
writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted. The return value is
true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted. While a stream is not draining, calls to
write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability. Writing data while the stream is not draining is particularly problematic for a
Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added. If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event: function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); }); A
Writable stream in object mode will always ignore the encoding argument. @param chunk Optional data to write. For streams not operating in object mode,
chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null. @param callback Callback for when this chunk of data is flushed.
@returns false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true. chunk: any, encoding: BufferEncoding,): boolean; The
writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted. The return value is
true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted. While a stream is not draining, calls to
write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability. Writing data while the stream is not draining is particularly problematic for a
Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added. If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event: function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); }); A
Writable stream in object mode will always ignore the encoding argument. @param chunk Optional data to write. For streams not operating in object mode,
chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null. @param encoding The encoding, if
chunk is a string. @param callback Callback for when this chunk of data is flushed.
@returns false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true. - ): Disposable;
Listens once to the
abort event on the provided signal. Listening to the
abort event on abort signals is unsafe and may lead to resource leaks since another third party with the signal can call e.stopImmediatePropagation(). Unfortunately Node.js cannot change this since it would violate the web standard. Additionally, the original API makes it easy to forget to remove listeners. This API allows safely using
AbortSignals in Node.js APIs by solving these two issues by listening to the event such that stopImmediatePropagation does not prevent the listener from running. Returns a disposable so that it may be unsubscribed from more easily.
import { addAbortListener } from 'node:events'; function example(signal) { let disposable; try { signal.addEventListener('abort', (e) => e.stopImmediatePropagation()); disposable = addAbortListener(signal, (e) => { // Do something when signal is aborted. }); } finally { disposable?.[Symbol.dispose](); } } @returns Disposable that removes the
abort listener. - src: string | Object | Stream | ArrayBuffer | Blob | Promise<any> | Iterable<any, any, any> | AsyncIterable<any, any, any> | AsyncGeneratorFunction
A utility method for creating duplex streams.
- Stream converts a writable stream into a writable Duplex and a readable stream into a Duplex.
- Blob converts into a readable Duplex.
- string converts into a readable Duplex.
- ArrayBuffer converts into a readable Duplex.
- AsyncIterable converts into a readable Duplex. Cannot yield null.
- AsyncGeneratorFunction converts into a readable/writable transform Duplex. Must take a source AsyncIterable as first parameter. Cannot yield null.
- AsyncFunction converts into a writable Duplex. Must return either null or undefined.
- Object ({ writable, readable }) converts readable and writable into Stream and then combines them into Duplex where the Duplex will write to the writable and read from the readable.
- Promise converts into a readable Duplex. Value null is ignored.
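A sketch of the AsyncGeneratorFunction case: the generator receives the written chunks as its source AsyncIterable, and whatever it yields becomes readable output.

```javascript
import { Duplex } from 'node:stream';

// An AsyncGeneratorFunction becomes a readable/writable transform Duplex.
const upper = Duplex.from(async function* (source) {
  for await (const chunk of source) {
    // chunk may arrive as a string or Buffer; normalize before transforming.
    yield String(chunk).toUpperCase();
  }
});

upper.write('hello ');
upper.end('world');

let out = '';
for await (const chunk of upper) out += chunk;
console.log(out); // 'HELLO WORLD'
```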
- options?: Pick<DuplexOptions<Duplex>, 'signal' | 'allowHalfOpen' | 'decodeStrings' | 'encoding' | 'highWaterMark' | 'objectMode'>
A utility method for creating a
Duplex from a web ReadableStream and WritableStream. - name: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName. For
EventEmitters this behaves exactly the same as calling .listeners on the emitter. For
EventTargets this is the only way to get the event listeners for the event target. This is useful for debugging and diagnostic purposes. import { getEventListeners, EventEmitter } from 'node:events'; { const ee = new EventEmitter(); const listener = () => console.log('Events are fun'); ee.on('foo', listener); console.log(getEventListeners(ee, 'foo')); // [ [Function: listener] ] } { const et = new EventTarget(); const listener = () => console.log('Events are fun'); et.addEventListener('foo', listener); console.log(getEventListeners(et, 'foo')); // [ [Function: listener] ] } - ): number;
Returns the currently set maximum number of listeners.
For
EventEmitters this behaves exactly the same as calling .getMaxListeners on the emitter. For
EventTargets this is the only way to get the max event listeners for the event target. If the number of event handlers on a single EventTarget exceeds the max set, the EventTarget will print a warning. import { getMaxListeners, setMaxListeners, EventEmitter } from 'node:events'; { const ee = new EventEmitter(); console.log(getMaxListeners(ee)); // 10 setMaxListeners(11, ee); console.log(getMaxListeners(ee)); // 11 } { const et = new EventTarget(); console.log(getMaxListeners(et)); // 10 setMaxListeners(11, et); console.log(getMaxListeners(et)); // 11 } - emitter: EventEmitter, eventName: string | symbol, options?: StaticEventEmitterIteratorOptions): AsyncIterator<any[]>;
import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); }); for await (const event of on(ee, 'foo')) { // The execution of this inner block is synchronous and it // processes one event at a time (even with await). Do not use // if concurrent execution is required. console.log(event); // prints ['bar'] [42] } // Unreachable here Returns an
AsyncIterator that iterates eventName events. It will throw if the EventEmitter emits 'error'. It removes all listeners when exiting the loop. The value returned by each iteration is an array composed of the emitted event arguments. An
AbortSignal can be used to cancel waiting on events: import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ac = new AbortController(); (async () => { const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); }); for await (const event of on(ee, 'foo', { signal: ac.signal })) { // The execution of this inner block is synchronous and it // processes one event at a time (even with await). Do not use // if concurrent execution is required. console.log(event); // prints ['bar'] [42] } // Unreachable here })(); process.nextTick(() => ac.abort()); Use the
close option to specify an array of event names that will end the iteration: import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); ee.emit('close'); }); for await (const event of on(ee, 'foo', { close: ['close'] })) { console.log(event); // prints ['bar'] [42] } // the loop will exit after 'close' is emitted console.log('done'); // prints 'done' @returns An
AsyncIterator that iterates eventName events emitted by the emitter eventName: string, options?: StaticEventEmitterIteratorOptions): AsyncIterator<any[]>; import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); }); for await (const event of on(ee, 'foo')) { // The execution of this inner block is synchronous and it // processes one event at a time (even with await). Do not use // if concurrent execution is required. console.log(event); // prints ['bar'] [42] } // Unreachable here Returns an
AsyncIterator that iterates eventName events. It will throw if the EventEmitter emits 'error'. It removes all listeners when exiting the loop. The value returned by each iteration is an array composed of the emitted event arguments. An
AbortSignal can be used to cancel waiting on events: import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ac = new AbortController(); (async () => { const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); }); for await (const event of on(ee, 'foo', { signal: ac.signal })) { // The execution of this inner block is synchronous and it // processes one event at a time (even with await). Do not use // if concurrent execution is required. console.log(event); // prints ['bar'] [42] } // Unreachable here })(); process.nextTick(() => ac.abort()); Use the
close option to specify an array of event names that will end the iteration: import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); ee.emit('close'); }); for await (const event of on(ee, 'foo', { close: ['close'] })) { console.log(event); // prints ['bar'] [42] } // the loop will exit after 'close' is emitted console.log('done'); // prints 'done' @returns An
AsyncIterator that iterates eventName events emitted by the emitter - emitter: EventEmitter, eventName: string | symbol, options?: StaticEventEmitterOptions): Promise<any[]>;
Creates a
Promise that is fulfilled when the EventEmitter emits the given event or that is rejected if the EventEmitter emits 'error' while waiting. The Promise will resolve with an array of all the arguments emitted to the given event. This method is intentionally generic and works with the web platform EventTarget interface, which has no special
'error' event semantics and does not listen to the 'error' event. import { once, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); process.nextTick(() => { ee.emit('myevent', 42); }); const [value] = await once(ee, 'myevent'); console.log(value); const err = new Error('kaboom'); process.nextTick(() => { ee.emit('error', err); }); try { await once(ee, 'myevent'); } catch (err) { console.error('error happened', err); } The special handling of the
'error' event is only used when events.once() is used to wait for another event. If events.once() is used to wait for the 'error' event itself, then it is treated as any other kind of event without special handling: import { EventEmitter, once } from 'node:events'; const ee = new EventEmitter(); once(ee, 'error') .then(([err]) => console.log('ok', err.message)) .catch((err) => console.error('error', err.message)); ee.emit('error', new Error('boom')); // Prints: ok boom An
AbortSignal can be used to cancel waiting for the event: import { EventEmitter, once } from 'node:events'; const ee = new EventEmitter(); const ac = new AbortController(); async function foo(emitter, event, signal) { try { await once(emitter, event, { signal }); console.log('event emitted!'); } catch (error) { if (error.name === 'AbortError') { console.error('Waiting for the event was canceled!'); } else { console.error('There was an error', error.message); } } } foo(ee, 'foo', ac.signal); ac.abort(); // Abort waiting for the event ee.emit('foo'); // Prints: Waiting for the event was canceled! eventName: string, options?: StaticEventEmitterOptions): Promise<any[]>; Creates a
Promise that is fulfilled when the EventEmitter emits the given event or that is rejected if the EventEmitter emits 'error' while waiting. The Promise will resolve with an array of all the arguments emitted to the given event. This method is intentionally generic and works with the web platform EventTarget interface, which has no special
'error' event semantics and does not listen to the 'error' event. import { once, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); process.nextTick(() => { ee.emit('myevent', 42); }); const [value] = await once(ee, 'myevent'); console.log(value); const err = new Error('kaboom'); process.nextTick(() => { ee.emit('error', err); }); try { await once(ee, 'myevent'); } catch (err) { console.error('error happened', err); } The special handling of the
'error' event is only used when events.once() is used to wait for another event. If events.once() is used to wait for the 'error' event itself, then it is treated as any other kind of event without special handling: import { EventEmitter, once } from 'node:events'; const ee = new EventEmitter(); once(ee, 'error') .then(([err]) => console.log('ok', err.message)) .catch((err) => console.error('error', err.message)); ee.emit('error', new Error('boom')); // Prints: ok boom An
AbortSignal can be used to cancel waiting for the event: import { EventEmitter, once } from 'node:events'; const ee = new EventEmitter(); const ac = new AbortController(); async function foo(emitter, event, signal) { try { await once(emitter, event, { signal }); console.log('event emitted!'); } catch (error) { if (error.name === 'AbortError') { console.error('Waiting for the event was canceled!'); } else { console.error('There was an error', error.message); } } } foo(ee, 'foo', ac.signal); ac.abort(); // Abort waiting for the event ee.emit('foo'); // Prints: Waiting for the event was canceled! - n?: number,): void;
import { setMaxListeners, EventEmitter } from 'node:events'; const target = new EventTarget(); const emitter = new EventEmitter(); setMaxListeners(5, target, emitter); @param n A non-negative number. The maximum number of listeners per
EventTarget event. @param eventTargets Zero or more {EventTarget} or {EventEmitter} instances. If none are specified,
n is set as the default max for all newly created {EventTarget} and {EventEmitter} objects. A utility method for creating a web
ReadableStream and WritableStream from a Duplex.
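A sketch of Duplex.toWeb, using a hypothetical pass-through Duplex so that data written via the web WritableStream comes back out of the web ReadableStream:

```javascript
import { Duplex } from 'node:stream';

// A pass-through Duplex: whatever is written becomes readable.
const duplex = new Duplex({
  read() {},
  write(chunk, encoding, callback) {
    this.push(chunk);
    callback();
  },
});

// toWeb() splits the Duplex into a pair of WHATWG streams.
const { readable, writable } = Duplex.toWeb(duplex);

const writer = writable.getWriter();
await writer.write(new TextEncoder().encode('hello'));

const reader = readable.getReader();
const { value } = await reader.read();
const text = new TextDecoder().decode(value);
console.log(text); // 'hello'
```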
class DiffieHellman
The
DiffieHellman class is a utility for creating Diffie-Hellman key exchanges. Instances of the
DiffieHellman class can be created using the createDiffieHellman function. import assert from 'node:assert'; const { createDiffieHellman, } = await import('node:crypto'); // Generate Alice's keys... const alice = createDiffieHellman(2048); const aliceKey = alice.generateKeys(); // Generate Bob's keys... const bob = createDiffieHellman(alice.getPrime(), alice.getGenerator()); const bobKey = bob.generateKeys(); // Exchange and generate the secret... const aliceSecret = alice.computeSecret(bobKey); const bobSecret = bob.computeSecret(aliceKey); // OK assert.strictEqual(aliceSecret.toString('hex'), bobSecret.toString('hex')); - verifyError: number
A bit field containing any warnings and/or errors resulting from a check performed during initialization of the
DiffieHellman object. The following values are valid for this property (as defined in the
node:constants module): DH_CHECK_P_NOT_SAFE_PRIME, DH_CHECK_P_NOT_PRIME, DH_UNABLE_TO_CHECK_GENERATOR, DH_NOT_SUITABLE_GENERATOR
- otherPublicKey: ArrayBufferView,inputEncoding?: null,outputEncoding?: null): NonSharedBuffer;
Computes the shared secret using
otherPublicKey as the other party's public key and returns the computed shared secret. The supplied key is interpreted using the specified inputEncoding, and the secret is encoded using the specified outputEncoding. If the inputEncoding is not provided, otherPublicKey is expected to be a Buffer, TypedArray, or DataView. If
outputEncoding is given, a string is returned; otherwise, a Buffer is returned. @param inputEncoding The
encoding of an otherPublicKey string. @param outputEncoding The
encoding of the return value. otherPublicKey: string, outputEncoding?: null): NonSharedBuffer; Computes the shared secret using
otherPublicKey as the other party's public key and returns the computed shared secret. The supplied key is interpreted using the specified inputEncoding, and the secret is encoded using the specified outputEncoding. If the inputEncoding is not provided, otherPublicKey is expected to be a Buffer, TypedArray, or DataView. If
outputEncoding is given, a string is returned; otherwise, a Buffer is returned. @param inputEncoding The
encoding of an otherPublicKey string. @param outputEncoding The
encoding of the return value. otherPublicKey: ArrayBufferView, inputEncoding: null,): string; Computes the shared secret using
otherPublicKey as the other party's public key and returns the computed shared secret. The supplied key is interpreted using the specified inputEncoding, and the secret is encoded using the specified outputEncoding. If the inputEncoding is not provided, otherPublicKey is expected to be a Buffer, TypedArray, or DataView. If
outputEncoding is given, a string is returned; otherwise, a Buffer is returned. @param inputEncoding The
encoding of an otherPublicKey string. @param outputEncoding The
encoding of the return value. otherPublicKey: string,): string; Computes the shared secret using
otherPublicKey as the other party's public key and returns the computed shared secret. The supplied key is interpreted using the specified inputEncoding, and the secret is encoded using the specified outputEncoding. If the inputEncoding is not provided, otherPublicKey is expected to be a Buffer, TypedArray, or DataView. If
outputEncoding is given, a string is returned; otherwise, a Buffer is returned. @param inputEncoding The
encoding of an otherPublicKey string. @param outputEncoding The
encoding of the return value. Generates private and public Diffie-Hellman key values unless they have been generated or computed already, and returns the public key in the specified
encoding. This key should be transferred to the other party. If encoding is provided, a string is returned; otherwise a Buffer is returned. This function is a thin wrapper around DH_generate_key(). In particular, once a private key has been generated or set, calling this function only updates the public key but does not generate a new private key.
): string;
@param encoding The encoding of the return value.
Returns the Diffie-Hellman generator in the specified encoding. If encoding is provided, a string is returned; otherwise a Buffer is returned.
): string;
Returns the Diffie-Hellman prime in the specified encoding. If encoding is provided, a string is returned; otherwise a Buffer is returned.
): string;
Returns the Diffie-Hellman private key in the specified encoding. If encoding is provided, a string is returned; otherwise a Buffer is returned.
): string;
Returns the Diffie-Hellman public key in the specified encoding. If encoding is provided, a string is returned; otherwise a Buffer is returned.
): string;
@param encoding The encoding of the return value.
- privateKey: ArrayBufferView): void;
Sets the Diffie-Hellman private key. If the encoding argument is provided, privateKey is expected to be a string. If no encoding is provided, privateKey is expected to be a Buffer, TypedArray, or DataView. This function does not automatically compute the associated public key. Either diffieHellman.setPublicKey() or diffieHellman.generateKeys() can be used to manually provide the public key or to automatically derive it.
privateKey: string, encoding: BufferEncoding): void;
@param encoding The encoding of the privateKey string.
- publicKey: ArrayBufferView): void;
Sets the Diffie-Hellman public key. If the encoding argument is provided, publicKey is expected to be a string. If no encoding is provided, publicKey is expected to be a Buffer, TypedArray, or DataView.
publicKey: string, encoding: BufferEncoding): void;
@param encoding The encoding of the publicKey string.
class ECDH
The ECDH class is a utility for creating Elliptic Curve Diffie-Hellman (ECDH) key exchanges. Instances of the ECDH class can be created using the createECDH function.
import assert from 'node:assert';
const { createECDH } = await import('node:crypto');
// Generate Alice's keys...
const alice = createECDH('secp521r1');
const aliceKey = alice.generateKeys();
// Generate Bob's keys...
const bob = createECDH('secp521r1');
const bobKey = bob.generateKeys();
// Exchange and generate the secret...
const aliceSecret = alice.computeSecret(bobKey);
const bobSecret = bob.computeSecret(aliceKey);
assert.strictEqual(aliceSecret.toString('hex'), bobSecret.toString('hex')); // OK
- otherPublicKey: ArrayBufferView): NonSharedBuffer;
Computes the shared secret using otherPublicKey as the other party's public key and returns the computed shared secret. The supplied key is interpreted using the specified inputEncoding, and the returned secret is encoded using the specified outputEncoding. If inputEncoding is not provided, otherPublicKey is expected to be a Buffer, TypedArray, or DataView. If outputEncoding is given, a string will be returned; otherwise a Buffer is returned.
ecdh.computeSecret will throw an ERR_CRYPTO_ECDH_INVALID_PUBLIC_KEY error when otherPublicKey lies outside of the elliptic curve. Since otherPublicKey is usually supplied from a remote user over an insecure network, be sure to handle this exception accordingly.
@param inputEncoding The encoding of the otherPublicKey string.
@param outputEncoding The encoding of the return value.
The overloads below behave the same; only the parameter and return types differ:
otherPublicKey: string): NonSharedBuffer;
otherPublicKey: ArrayBufferView): string;
otherPublicKey: string): string;
Generates private and public EC Diffie-Hellman key values, and returns the public key in the specified
format and encoding. This key should be transferred to the other party. The format argument specifies point encoding and can be 'compressed' or 'uncompressed'. If format is not specified, the point will be returned in 'uncompressed' format. If encoding is provided, a string is returned; otherwise a Buffer is returned.
): string;
@param encoding The encoding of the return value.
If encoding is specified, a string is returned; otherwise a Buffer is returned.
): string;
@param encoding The encoding of the return value.
@returns The EC Diffie-Hellman private key in the specified encoding.
- encoding?: null): NonSharedBuffer;
The format argument specifies point encoding and can be 'compressed' or 'uncompressed'. If format is not specified, the point will be returned in 'uncompressed' format. If encoding is specified, a string is returned; otherwise a Buffer is returned.
): string;
@param encoding The encoding of the return value.
@returns The EC Diffie-Hellman public key in the specified encoding and format.
- privateKey: ArrayBufferView): void;
Sets the EC Diffie-Hellman private key. If encoding is provided, privateKey is expected to be a string; otherwise privateKey is expected to be a Buffer, TypedArray, or DataView. If privateKey is not valid for the curve specified when the ECDH object was created, an error is thrown. Upon setting the private key, the associated public point (key) is also generated and set in the ECDH object.
privateKey: string, encoding: BufferEncoding): void;
@param encoding The encoding of the privateKey string.
- curve: string, outputEncoding?: 'latin1' | 'base64' | 'base64url' | 'hex', format?: 'uncompressed' | 'compressed' | 'hybrid'): string | NonSharedBuffer;
Converts the EC Diffie-Hellman public key specified by key and curve to the format specified by format. The format argument specifies point encoding and can be 'compressed', 'uncompressed' or 'hybrid'. The supplied key is interpreted using the specified inputEncoding, and the returned key is encoded using the specified outputEncoding.
Use getCurves to obtain a list of available curve names. On recent OpenSSL releases, openssl ecparam -list_curves will also display the name and description of each available elliptic curve.
If format is not specified, the point will be returned in 'uncompressed' format. If inputEncoding is not provided, key is expected to be a Buffer, TypedArray, or DataView.
Example (uncompressing a key):
const { createECDH, ECDH } = await import('node:crypto');
const ecdh = createECDH('secp256k1');
ecdh.generateKeys();
const compressedKey = ecdh.getPublicKey('hex', 'compressed');
const uncompressedKey = ECDH.convertKey(compressedKey, 'secp256k1', 'hex', 'hex', 'uncompressed');
// The converted key and the uncompressed public key should be the same
console.log(uncompressedKey === ecdh.getPublicKey('hex'));
@param inputEncoding The encoding of the key string.
@param outputEncoding The encoding of the return value.
class Hash
The Hash class is a utility for creating hash digests of data. It can be used in one of two ways:
- As a stream that is both readable and writable, where data is written to produce a computed hash digest on the readable side, or
- Using the hash.update() and hash.digest() methods to produce the computed hash.
The createHash method is used to create Hash instances. Hash objects are not to be created directly using the new keyword.
Example: Using Hash objects as streams:
const { createHash } = await import('node:crypto');
const hash = createHash('sha256');
hash.on('readable', () => {
  // Only one element is going to be produced by the hash stream.
  const data = hash.read();
  if (data) {
    console.log(data.toString('hex'));
    // Prints:
    // 6a2da20943931e9834fc12cfe5bb47bbd9ae43489a30726962b576f4e3993e50
  }
});
hash.write('some data to hash');
hash.end();
Example: Using Hash and piped streams:
import { createReadStream } from 'node:fs';
import { stdout } from 'node:process';
const { createHash } = await import('node:crypto');
const hash = createHash('sha256');
const input = createReadStream('test.js');
input.pipe(hash).setEncoding('hex').pipe(stdout);
Example: Using the hash.update() and hash.digest() methods:
const { createHash } = await import('node:crypto');
const hash = createHash('sha256');
hash.update('some data to hash');
console.log(hash.digest('hex'));
// Prints:
// 6a2da20943931e9834fc12cfe5bb47bbd9ae43489a30726962b576f4e3993e50
- allowHalfOpen: boolean
If false then the stream will automatically end the writable side when the readable side ends. Set initially by the allowHalfOpen constructor option, which defaults to true. This can be changed manually to change the half-open behavior of an existing Duplex stream instance, but must be changed before the 'end' event is emitted.
- readable: boolean
Is true if it is safe to call read, which means the stream has not been destroyed or emitted 'error' or 'end'.
- readonly readableAborted: boolean
Returns whether the stream was destroyed or errored before emitting 'end'.
- readonly readableEncoding: null | BufferEncoding
Getter for the property encoding of a given Readable stream. The encoding property can be set using the setEncoding method.
- readonly readableFlowing: null | boolean
This property reflects the current state of a Readable stream as described in the Three states section.
- readonly readableHighWaterMark: number
Returns the value of highWaterMark passed when creating this Readable.
- readonly readableLength: number
This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the highWaterMark.
- readonly writable: boolean
Is true if it is safe to call writable.write(), which means the stream has not been destroyed, errored, or ended.
- readonly writableAborted: boolean
Returns whether the stream was destroyed or errored before emitting 'finish'.
- readonly writableCorked: number
Number of times writable.uncork() needs to be called in order to fully uncork the stream.
- readonly writableEnded: boolean
Is true after writable.end() has been called. This property does not indicate whether the data has been flushed; for this, use writable.writableFinished instead.
- readonly writableHighWaterMark: number
Returns the value of highWaterMark passed when creating this Writable.
- readonly writableLength: number
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the highWaterMark.
- readonly writableNeedDrain: boolean
Is true if the stream's buffer has been full and stream will emit 'drain'.
Value: boolean
Change the default captureRejections option on all new EventEmitter objects.
- readonly static captureRejectionSymbol: typeof captureRejectionSymbol
Value: Symbol.for('nodejs.rejection')
See how to write a custom rejection handler.
- static defaultMaxListeners: number
By default, a maximum of 10 listeners can be registered for any single event. This limit can be changed for individual EventEmitter instances using the emitter.setMaxListeners(n) method. To change the default for all EventEmitter instances, the events.defaultMaxListeners property can be used. If this value is not a positive number, a RangeError is thrown.
Take caution when setting events.defaultMaxListeners because the change affects all EventEmitter instances, including those created before the change is made. However, calling emitter.setMaxListeners(n) still has precedence over events.defaultMaxListeners.
This is not a hard limit. The EventEmitter instance will allow more listeners to be added but will output a trace warning to stderr indicating that a "possible EventEmitter memory leak" has been detected. For any single EventEmitter, the emitter.getMaxListeners() and emitter.setMaxListeners() methods can be used to temporarily avoid this warning:
import { EventEmitter } from 'node:events';
const emitter = new EventEmitter();
emitter.setMaxListeners(emitter.getMaxListeners() + 1);
emitter.once('event', () => {
  // do stuff
  emitter.setMaxListeners(Math.max(emitter.getMaxListeners() - 1, 0));
});
The --trace-warnings command-line flag can be used to display the stack trace for such warnings.
The emitted warning can be inspected with process.on('warning') and will have the additional emitter, type, and count properties, referring to the event emitter instance, the event's name, and the number of attached listeners, respectively. Its name property is set to 'MaxListenersExceededWarning'.
- readonly static errorMonitor: typeof errorMonitor
This symbol shall be used to install a listener for only monitoring 'error' events. Listeners installed using this symbol are called before the regular 'error' listeners are called.
Installing a listener using this symbol does not change the behavior once an 'error' event is emitted. Therefore, the process will still crash if no regular 'error' listener is installed.
Calls
readable.destroy() with an AbortError and returns a promise that fulfills when the stream is finished.
- @returns AsyncIterator to fully consume the stream.
- event: 'close', listener: () => void): this;
Event emitter. The defined events on documents include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
The same event list applies to each of the following overloads:
event: 'data', listener: (chunk: any) => void): this;
event: 'drain', listener: () => void): this;
event: 'end', listener: () => void): this;
event: 'error', ): this;
event: 'finish', listener: () => void): this;
event: 'pause', listener: () => void): this;
event: 'pipe', ): this;
event: 'readable', listener: () => void): this;
event: 'resume', listener: () => void): this;
event: 'unpipe', ): this;
event: string | symbol, listener: (...args: any[]) => void): this;
This method returns a new stream with chunks of the underlying stream paired with a counter in the form [index, chunk]. The first index value is 0 and it increases by 1 for each chunk produced.
@returns a stream of indexed pairs.
- stream: T | ComposeFnParam | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
- copy(
Creates a new Hash object that contains a deep copy of the internal state of the current Hash object. The optional options argument controls stream behavior. For XOF hash functions such as 'shake256', the outputLength option can be used to specify the desired output length in bytes. An error is thrown when an attempt is made to copy the Hash object after its hash.digest() method has been called.
// Calculate a rolling hash.
const { createHash } = await import('node:crypto');
const hash = createHash('sha256');
hash.update('one');
console.log(hash.copy().digest('hex'));
hash.update('two');
console.log(hash.copy().digest('hex'));
hash.update('three');
console.log(hash.copy().digest('hex'));
// Etc.
@param options stream.transform options
The
writable.cork() method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.
The primary intent of writable.cork() is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them all to writable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork() without implementing writable._writev() may have an adverse effect on throughput.
See also: writable.uncork(), writable._writev().
- ): this;
Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the readable stream will release any internal resources and subsequent calls to push() will be ignored.
Once destroy() has been called, any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.
Implementors should not override this method, but instead implement readable._destroy().
@param error Error which will be passed as payload in 'error' event
Calculates the digest of all of the data passed to be hashed (using the hash.update() method). If encoding is provided, a string will be returned; otherwise a Buffer is returned. The Hash object can not be used again after the hash.digest() method has been called. Multiple calls will cause an error to be thrown.
): string;
@param encoding The encoding of the return value.
- drop(limit: number,
This method returns a new stream with the first limit chunks dropped from the start.
@param limit the number of chunks to drop from the readable.
@returns a stream with limit chunks dropped from the start.
- emit(event: 'close'): boolean;
Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each. Returns true if the event had listeners, false otherwise.
import { EventEmitter } from 'node:events';
const myEmitter = new EventEmitter();
// First listener
myEmitter.on('event', function firstListener() {
  console.log('Helloooo! first listener');
});
// Second listener
myEmitter.on('event', function secondListener(arg1, arg2) {
  console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
});
// Third listener
myEmitter.on('event', function thirdListener(...args) {
  const parameters = args.join(', ');
  console.log(`event with parameters ${parameters} in third listener`);
});
console.log(myEmitter.listeners('event'));
myEmitter.emit('event', 1, 2, 3, 4, 5);
// Prints:
// [
//   [Function: firstListener],
//   [Function: secondListener],
//   [Function: thirdListener]
// ]
// Helloooo! first listener
// event with parameters 1, 2 in second listener
// event with parameters 1, 2, 3, 4, 5 in third listener
- end(cb?: () => void): this;
Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream. Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'.
import fs from 'node:fs';
const file = fs.createWriteStream('example.txt');
file.write('hello, ');
file.end('world!');
// Writing more now is not allowed!
end(chunk: any, cb?: () => void): this;
end(chunk: any, encoding: BufferEncoding, cb?: () => void): this;
@param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.
@param encoding The encoding if chunk is a string
Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or Symbols.
import { EventEmitter } from 'node:events';
const myEE = new EventEmitter();
myEE.on('foo', () => {});
myEE.on('bar', () => {});
const sym = Symbol('symbol');
myEE.on(sym, () => {});
console.log(myEE.eventNames());
// Prints: [ 'foo', 'bar', Symbol(symbol) ]
- ): Promise<boolean>;
This method is similar to Array.prototype.every and calls fn on each chunk in the stream to check if all awaited return values are truthy for fn. Once an fn call on a chunk's awaited return value is falsy, the stream is destroyed and the promise is fulfilled with false. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true.
@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to true if fn returned a truthy value for every one of the chunks.
This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise, that promise will be awaited.
@param fn a function to filter chunks from the stream. Async or not.
@returns a stream filtered with the predicate fn.
- ): Promise<undefined | T>;
This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.
@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.
find(): Promise<any>;
This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.
It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.
@param fn a function to map over every chunk in the stream. May be async. May be a stream or generator.
@returns a stream flat-mapped with the function fn.
- ): Promise<void>;
This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise, that promise will be awaited.
This method is different from for await...of loops in that it can optionally process chunks concurrently. In addition, a forEach iteration can only be stopped by having passed a signal option and aborting the related AbortController, while for await...of can be stopped with break or return. In either case the stream will be destroyed.
This method is different from listening to the 'data' event in that it uses the readable event in the underlying machinery and can limit the number of concurrent fn calls.
@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise for when the stream has finished.
Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to EventEmitter.defaultMaxListeners.
The readable.isPaused() method returns the current operating state of the Readable. This is used primarily by the mechanism that underlies the readable.pipe() method. In most typical cases, there will be no reason to use this method directly.
const readable = new stream.Readable();
readable.isPaused(); // === false
readable.pause();
readable.isPaused(); // === true
readable.resume();
readable.isPaused(); // === false
- options?: { destroyOnReturn: boolean }): AsyncIterator<any>;
The iterator created by this method gives users the option to cancel the destruction of the stream if the for await...of loop is exited by return, break, or throw, or if the iterator should destroy the stream if the stream emitted an error during iteration.
- eventName: string | symbol, listener?: Function): number;
Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.
@param eventName The name of the event being listened for
@param listener The event handler function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named eventName.
server.on('connection', (stream) => {
  console.log('someone connected!');
});
console.log(util.inspect(server.listeners('connection')));
// Prints: [ [Function] ]
- map(
This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise, that promise will be awaited before being passed to the result stream.
@param fn a function to map over every chunk in the stream. Async or not.
@returns a stream mapped with the function fn.
- eventName: string | symbol, listener: (...args: any[]) => void): this;
Alias for emitter.removeListener().
- on(event: 'close', listener: () => void): this;
Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.
server.on('connection', (stream) => {
  console.log('someone connected!');
});
Returns a reference to the EventEmitter, so that calls can be chained.
By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.
import { EventEmitter } from 'node:events';
const myEE = new EventEmitter();
myEE.on('foo', () => console.log('a'));
myEE.prependListener('foo', () => console.log('b'));
myEE.emit('foo');
// Prints:
//   b
//   a
@param listener The callback function
- once(event: 'close', listener: () => void): this;
Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.
server.once('connection', (stream) => {
  console.log('Ah, we have our first user!');
});
Returns a reference to the EventEmitter, so that calls can be chained.
By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.
import { EventEmitter } from 'node:events';
const myEE = new EventEmitter();
myEE.once('foo', () => console.log('a'));
myEE.prependOnceListener('foo', () => console.log('b'));
myEE.emit('foo');
// Prints:
//   b
//   a
@param listener The callback function
The readable.pause() method will cause a stream in flowing mode to stop emitting 'data' events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.
const readable = getReadableStreamSomehow();
readable.on('data', (chunk) => {
  console.log(`Received ${chunk.length} bytes of data.`);
  readable.pause();
  console.log('There will be no additional data for 1 second.');
  setTimeout(() => {
    console.log('Now data will start flowing again.');
    readable.resume();
  }, 1000);
});
The readable.pause() method has no effect if there is a 'readable' event listener.
- event: 'close', listener: () => void): this;
Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.
server.prependListener('connection', (stream) => {
  console.log('someone connected!');
});
Returns a reference to the EventEmitter, so that calls can be chained.
@param listener The callback function
- event: 'close', listener: () => void): this;
Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.
server.prependOnceListener('connection', (stream) => {
  console.log('Ah, we have our first user!');
});
Returns a reference to the EventEmitter, so that calls can be chained.
@param listener The callback function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named eventName, including any wrappers (such as those created by .once()).
import { EventEmitter } from 'node:events';
const emitter = new EventEmitter();
emitter.once('log', () => console.log('log once'));

// Returns a new Array with a function `onceWrapper` which has a property
// `listener` which contains the original listener bound above
const listeners = emitter.rawListeners('log');
const logFnWrapper = listeners[0];

// Logs "log once" to the console and does not unbind the `once` event
logFnWrapper.listener();

// Logs "log once" to the console and removes the listener
logFnWrapper();

emitter.on('log', () => console.log('log persistently'));
// Will return a new Array with a single function bound by `.on()` above
const newListeners = emitter.rawListeners('log');

// Logs "log persistently" twice
newListeners[0]();
emitter.emit('log');
- read(size?: number): any;
The readable.read() method reads data out of the internal buffer and returns it. If no data is available to be read, null is returned. By default, the data is returned as a Buffer object unless an encoding has been specified using the readable.setEncoding() method or the stream is operating in object mode.
The optional size argument specifies a specific number of bytes to read. If size bytes are not available to be read, null will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.
If the size argument is not specified, all of the data contained in the internal buffer will be returned.
The size argument must be less than or equal to 1 GiB.
The readable.read() method should only be called on Readable streams operating in paused mode. In flowing mode, readable.read() is called automatically until the internal buffer is fully drained.
const readable = getReadableStreamSomehow();

// 'readable' may be triggered multiple times as data is buffered in
readable.on('readable', () => {
  let chunk;
  console.log('Stream is readable (new data received in buffer)');
  // Use a loop to make sure we read all currently available data
  while (null !== (chunk = readable.read())) {
    console.log(`Read ${chunk.length} bytes of data...`);
  }
});

// 'end' will be triggered once when there is no more data available
readable.on('end', () => {
  console.log('Reached end of stream.');
});
Each call to readable.read() returns a chunk of data, or null. The chunks are not concatenated. A while loop is necessary to consume all data currently in the buffer. When reading a large file, .read() may return null, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new 'readable' event will be emitted when there is more data in the buffer. Finally the 'end' event will be emitted when there is no more data to come.
Therefore to read a file's whole contents from a readable, it is necessary to collect chunks across multiple 'readable' events:
const chunks = [];

readable.on('readable', () => {
  let chunk;
  while (null !== (chunk = readable.read())) {
    chunks.push(chunk);
  }
});

readable.on('end', () => {
  const content = chunks.join('');
});
A Readable stream in object mode will always return a single item from a call to readable.read(size), regardless of the value of the size argument.
If the readable.read() method returns a chunk of data, a 'data' event will also be emitted.
Calling read after the 'end' event has been emitted will return null. No runtime error will be raised.
@param size Optional argument to specify how much data to read.
- initial?: undefined,): Promise<T>;
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.
The reducer function iterates the stream element-by-element which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the readable.map method.
@param fn a reducer function to call over every chunk in the stream. Async or not.
@param initial the initial value to use in the reduction.
@returns a promise for the final value of the reduction.
- initial: T,): Promise<T>;
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.
The reducer function iterates the stream element-by-element which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the readable.map method.
@param fn a reducer function to call over every chunk in the stream. Async or not.
@param initial the initial value to use in the reduction.
@returns a promise for the final value of the reduction.
- eventName?: string | symbol): this;
Removes all listeners, or those of the specified eventName.
It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).
Returns a reference to the EventEmitter, so that calls can be chained.
- event: 'close', listener: () => void): this;
Removes the specified listener from the listener array for the event named eventName.
const callback = (stream) => {
  console.log('someone connected!');
};
server.on('connection', callback);
// ...
server.removeListener('connection', callback);
removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance.
Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from emit() in progress. Subsequent events behave as expected.
import { EventEmitter } from 'node:events';
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();

const callbackA = () => {
  console.log('A');
  myEmitter.removeListener('event', callbackB);
};

const callbackB = () => {
  console.log('B');
};

myEmitter.on('event', callbackA);
myEmitter.on('event', callbackB);

// callbackA removes listener callbackB but it will still be called.
// Internal listener array at time of emit [callbackA, callbackB]
myEmitter.emit('event');
// Prints:
//   A
//   B

// callbackB is now removed.
// Internal listener array [callbackA]
myEmitter.emit('event');
// Prints:
//   A
Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.
When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed:
import { EventEmitter } from 'node:events';
const ee = new EventEmitter();

function pong() {
  console.log('pong');
}

ee.on('ping', pong);
ee.once('ping', pong);
ee.removeListener('ping', pong);

ee.emit('ping');
ee.emit('ping');
Returns a reference to the EventEmitter, so that calls can be chained.
readable.resume()method causes an explicitly pausedReadablestream to resume emitting'data'events, switching the stream into flowing mode.The
readable.resume()method can be used to fully consume the data from a stream without actually processing any of that data:getReadableStreamSomehow() .resume() .on('end', () => { console.log('Reached the end, but did not read anything.'); });The
readable.resume()method has no effect if there is a'readable'event listener.- encoding: BufferEncoding): this;
The
writable.setDefaultEncoding()method sets the defaultencodingfor aWritablestream.@param encodingThe new default encoding
- encoding: BufferEncoding): this;
The readable.setEncoding() method sets the character encoding for data read from the Readable stream.
By default, no encoding is assigned and stream data will be returned as Buffer objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than as Buffer objects. For instance, calling readable.setEncoding('utf8') will cause the output data to be interpreted as UTF-8 data, and passed as strings. Calling readable.setEncoding('hex') will cause the data to be encoded in hexadecimal string format.
The Readable stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream as Buffer objects.
const readable = getReadableStreamSomehow();
readable.setEncoding('utf8');
readable.on('data', (chunk) => {
  assert.equal(typeof chunk, 'string');
  console.log('Got %d characters of string data:', chunk.length);
});
@param encoding The encoding to use.
- n: number): this;
By default EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps finding memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.
Returns a reference to the EventEmitter, so that calls can be chained.
- some(): Promise<boolean>;
This method is similar to Array.prototype.some and calls fn on each chunk in the stream until the awaited return value is true (or any truthy value). Once an fn call on a chunk's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false.
@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to true if fn returned a truthy value for at least one of the chunks.
- @param limit the number of chunks to take from the readable.
@returns a stream with limit chunks taken.
- ): Promise<any[]>;
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.
@returns a promise containing an array with the contents of the stream.
The writable.uncork() method flushes all data buffered since cork was called.
When using writable.cork() and writable.uncork() to manage the buffering of writes to a stream, defer calls to writable.uncork() using process.nextTick(). Doing so allows batching of all writable.write() calls that occur within a given Node.js event loop phase.
stream.cork();
stream.write('some ');
stream.write('data ');
process.nextTick(() => stream.uncork());
If the writable.cork() method is called multiple times on a stream, the same number of calls to writable.uncork() must be called to flush the buffered data.
stream.cork();
stream.write('some ');
stream.cork();
stream.write('data ');
process.nextTick(() => {
  stream.uncork();
  // The data will not be flushed until uncork() is called a second time.
  stream.uncork();
});
See also: writable.cork().
- destination?: WritableStream): this;
The readable.unpipe() method detaches a Writable stream previously attached using the pipe method.
If the destination is not specified, then all pipes are detached.
If the destination is specified, but no pipe is set up for it, then the method does nothing.
import fs from 'node:fs';
const readable = getReadableStreamSomehow();
const writable = fs.createWriteStream('file.txt');
// All the data from readable goes into 'file.txt',
// but only for the first second.
readable.pipe(writable);
setTimeout(() => {
  console.log('Stop writing to file.txt.');
  readable.unpipe(writable);
  console.log('Manually close the file stream.');
  writable.end();
}, 1000);
@param destination Optional specific stream to unpipe
- chunk: any, encoding?: BufferEncoding): void;
Passing chunk as null signals the end of the stream (EOF) and behaves the same as readable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.
The readable.unshift() method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.
The stream.unshift(chunk) method cannot be called after the 'end' event has been emitted or a runtime error will be thrown.
Developers using stream.unshift() often should consider switching to use of a Transform stream instead. See the API for stream implementers section for more information.
// Pull off a header delimited by \n\n.
// Use unshift() if we get too much.
// Call the callback with (error, header, stream).
import { StringDecoder } from 'node:string_decoder';
function parseHeader(stream, callback) {
  stream.on('error', callback);
  stream.on('readable', onReadable);
  const decoder = new StringDecoder('utf8');
  let header = '';
  function onReadable() {
    let chunk;
    while (null !== (chunk = stream.read())) {
      const str = decoder.write(chunk);
      if (str.includes('\n\n')) {
        // Found the header boundary.
        const split = str.split(/\n\n/);
        header += split.shift();
        const remaining = split.join('\n\n');
        const buf = Buffer.from(remaining, 'utf8');
        stream.removeListener('error', callback);
        // Remove the 'readable' listener before unshifting.
        stream.removeListener('readable', onReadable);
        if (buf.length) stream.unshift(buf);
        // Now the body of the message can be read from the stream.
        callback(null, header, stream);
        return;
      }
      // Still reading the header.
      header += str;
    }
  }
}
Unlike push, stream.unshift(chunk) will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results if readable.unshift() is called during a read (i.e. from within a _read implementation on a custom stream). Following the call to readable.unshift() with an immediate push will reset the reading state appropriately, however it is best to simply avoid calling readable.unshift() while in the process of performing a read.
@param chunk Chunk of data to unshift onto the read queue. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray}, {DataView} or null. For object mode streams, chunk may be any JavaScript value.
@param encoding Encoding of string chunks. Must be a valid Buffer encoding, such as 'utf8' or 'ascii'.
Updates the hash content with the given data, the encoding of which is given in inputEncoding. If encoding is not provided, and the data is a string, an encoding of 'utf8' is enforced. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.
This can be called many times with new data as it is streamed.
- data: string,
Updates the hash content with the given data, the encoding of which is given in inputEncoding. If encoding is not provided, and the data is a string, an encoding of 'utf8' is enforced. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.
This can be called many times with new data as it is streamed.
@param inputEncoding The encoding of the data string.
- wrap(stream: ReadableStream): this;
Prior to Node.js 0.10, streams did not implement the entire node:stream module API as it is currently defined. (See Compatibility for more information.)
When using an older Node.js library that emits 'data' events and has a pause method that is advisory only, the readable.wrap() method can be used to create a Readable stream that uses the old stream as its data source.
It will rarely be necessary to use readable.wrap() but the method has been provided as a convenience for interacting with older Node.js applications and libraries.
import { OldReader } from './old-api-module.js';
import { Readable } from 'node:stream';
const oreader = new OldReader();
const myReader = new Readable().wrap(oreader);

myReader.on('readable', () => {
  myReader.read(); // etc.
});
@param stream An "old style" readable stream
- chunk: any,): boolean;
The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.
The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.
While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.
Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.
If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:
function write(data, cb) {
  if (!stream.write(data)) {
    stream.once('drain', cb);
  } else {
    process.nextTick(cb);
  }
}

// Wait for cb to be called before doing any other write.
write('hello', () => {
  console.log('Write completed, do more writes now.');
});
A Writable stream in object mode will always ignore the encoding argument.
@param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.
@param callback Callback for when this chunk of data is flushed.
@returns false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.
- chunk: any, encoding: BufferEncoding,): boolean;
The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.
The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.
While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.
Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.
If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:
function write(data, cb) {
  if (!stream.write(data)) {
    stream.once('drain', cb);
  } else {
    process.nextTick(cb);
  }
}

// Wait for cb to be called before doing any other write.
write('hello', () => {
  console.log('Write completed, do more writes now.');
});
A Writable stream in object mode will always ignore the encoding argument.
@param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.
@param encoding The encoding, if chunk is a string.
@param callback Callback for when this chunk of data is flushed.
@returns false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.
- ): Disposable;
Listens once to the abort event on the provided signal.
Listening to the abort event on abort signals is unsafe and may lead to resource leaks since another third party with the signal can call e.stopImmediatePropagation(). Unfortunately Node.js cannot change this since it would violate the web standard. Additionally, the original API makes it easy to forget to remove listeners.
This API allows safely using AbortSignals in Node.js APIs by solving these two issues by listening to the event such that stopImmediatePropagation does not prevent the listener from running.
Returns a disposable so that it may be unsubscribed from more easily.
import { addAbortListener } from 'node:events';

function example(signal) {
  let disposable;
  try {
    signal.addEventListener('abort', (e) => e.stopImmediatePropagation());
    disposable = addAbortListener(signal, (e) => {
      // Do something when signal is aborted.
    });
  } finally {
    disposable?.[Symbol.dispose]();
  }
}
@returns Disposable that removes the abort listener.
- src: string | Object | Stream | ArrayBuffer | Blob | Promise<any> | Iterable<any, any, any> | AsyncIterable<any, any, any> | AsyncGeneratorFunction
A utility method for creating duplex streams.
Stream converts writable stream into writable Duplex and readable stream to Duplex.
Blob converts into readable Duplex.
string converts into readable Duplex.
ArrayBuffer converts into readable Duplex.
AsyncIterable converts into a readable Duplex. Cannot yield null.
AsyncGeneratorFunction converts into a readable/writable transform Duplex. Must take a source AsyncIterable as first parameter. Cannot yield null.
AsyncFunction converts into a writable Duplex. Must return either null or undefined.
Object ({ writable, readable }) converts readable and writable into Stream and then combines them into Duplex where the Duplex will write to the writable and read from the readable.
Promise converts into readable Duplex. Value null is ignored.
- options?: Pick<DuplexOptions<Duplex>, 'signal' | 'allowHalfOpen' | 'decodeStrings' | 'encoding' | 'highWaterMark' | 'objectMode'>
A utility method for creating a Duplex from a web ReadableStream and WritableStream.
- name: string | symbol): Function[];
Returns a copy of the array of listeners for the event named eventName.
For EventEmitters this behaves exactly the same as calling .listeners on the emitter.
For EventTargets this is the only way to get the event listeners for the event target. This is useful for debugging and diagnostic purposes.
import { getEventListeners, EventEmitter } from 'node:events';

{
  const ee = new EventEmitter();
  const listener = () => console.log('Events are fun');
  ee.on('foo', listener);
  console.log(getEventListeners(ee, 'foo')); // [ [Function: listener] ]
}
{
  const et = new EventTarget();
  const listener = () => console.log('Events are fun');
  et.addEventListener('foo', listener);
  console.log(getEventListeners(et, 'foo')); // [ [Function: listener] ]
}
- ): number;
Returns the currently set max amount of listeners.
For EventEmitters this behaves exactly the same as calling .getMaxListeners on the emitter.
For EventTargets this is the only way to get the max event listeners for the event target. If the number of event handlers on a single EventTarget exceeds the max set, the EventTarget will print a warning.
import { getMaxListeners, setMaxListeners, EventEmitter } from 'node:events';

{
  const ee = new EventEmitter();
  console.log(getMaxListeners(ee)); // 10
  setMaxListeners(11, ee);
  console.log(getMaxListeners(ee)); // 11
}
{
  const et = new EventTarget();
  console.log(getMaxListeners(et)); // 10
  setMaxListeners(11, et);
  console.log(getMaxListeners(et)); // 11
}
- emitter: EventEmitter, eventName: string | symbol, options?: StaticEventEmitterIteratorOptions): AsyncIterator<any[]>;
import { on, EventEmitter } from 'node:events';
import process from 'node:process';

const ee = new EventEmitter();

// Emit later on
process.nextTick(() => {
  ee.emit('foo', 'bar');
  ee.emit('foo', 42);
});

for await (const event of on(ee, 'foo')) {
  // The execution of this inner block is synchronous and it
  // processes one event at a time (even with await). Do not use
  // if concurrent execution is required.
  console.log(event); // prints ['bar'] [42]
}
// Unreachable here
Returns an AsyncIterator that iterates eventName events. It will throw if the EventEmitter emits 'error'. It removes all listeners when exiting the loop. The value returned by each iteration is an array composed of the emitted event arguments.
An AbortSignal can be used to cancel waiting on events:
import { on, EventEmitter } from 'node:events';
import process from 'node:process';

const ac = new AbortController();

(async () => {
  const ee = new EventEmitter();

  // Emit later on
  process.nextTick(() => {
    ee.emit('foo', 'bar');
    ee.emit('foo', 42);
  });

  for await (const event of on(ee, 'foo', { signal: ac.signal })) {
    // The execution of this inner block is synchronous and it
    // processes one event at a time (even with await). Do not use
    // if concurrent execution is required.
    console.log(event); // prints ['bar'] [42]
  }
  // Unreachable here
})();

process.nextTick(() => ac.abort());
Use the close option to specify an array of event names that will end the iteration:
import { on, EventEmitter } from 'node:events';
import process from 'node:process';

const ee = new EventEmitter();

// Emit later on
process.nextTick(() => {
  ee.emit('foo', 'bar');
  ee.emit('foo', 42);
  ee.emit('close');
});

for await (const event of on(ee, 'foo', { close: ['close'] })) {
  console.log(event); // prints ['bar'] [42]
}
// the loop will exit after 'close' is emitted
console.log('done'); // prints 'done'
@returns An AsyncIterator that iterates eventName events emitted by the emitter
- eventName: string, options?: StaticEventEmitterIteratorOptions): AsyncIterator<any[]>;
import { on, EventEmitter } from 'node:events';
import process from 'node:process';

const ee = new EventEmitter();

// Emit later on
process.nextTick(() => {
  ee.emit('foo', 'bar');
  ee.emit('foo', 42);
});

for await (const event of on(ee, 'foo')) {
  // The execution of this inner block is synchronous and it
  // processes one event at a time (even with await). Do not use
  // if concurrent execution is required.
  console.log(event); // prints ['bar'] [42]
}
// Unreachable here
Returns an AsyncIterator that iterates eventName events. It will throw if the EventEmitter emits 'error'. It removes all listeners when exiting the loop. The value returned by each iteration is an array composed of the emitted event arguments.
An AbortSignal can be used to cancel waiting on events:
import { on, EventEmitter } from 'node:events';
import process from 'node:process';

const ac = new AbortController();

(async () => {
  const ee = new EventEmitter();

  // Emit later on
  process.nextTick(() => {
    ee.emit('foo', 'bar');
    ee.emit('foo', 42);
  });

  for await (const event of on(ee, 'foo', { signal: ac.signal })) {
    // The execution of this inner block is synchronous and it
    // processes one event at a time (even with await). Do not use
    // if concurrent execution is required.
    console.log(event); // prints ['bar'] [42]
  }
  // Unreachable here
})();

process.nextTick(() => ac.abort());
Use the close option to specify an array of event names that will end the iteration:
import { on, EventEmitter } from 'node:events';
import process from 'node:process';

const ee = new EventEmitter();

// Emit later on
process.nextTick(() => {
  ee.emit('foo', 'bar');
  ee.emit('foo', 42);
  ee.emit('close');
});

for await (const event of on(ee, 'foo', { close: ['close'] })) {
  console.log(event); // prints ['bar'] [42]
}
// the loop will exit after 'close' is emitted
console.log('done'); // prints 'done'
@returns An AsyncIterator that iterates eventName events emitted by the emitter
- emitter: EventEmitter, eventName: string | symbol, options?: StaticEventEmitterOptions): Promise<any[]>;
Creates a Promise that is fulfilled when the EventEmitter emits the given event, or that is rejected if the EventEmitter emits 'error' while waiting. The Promise will resolve with an array of all the arguments emitted to the given event.

This method is intentionally generic and works with the web platform EventTarget interface, which has no special 'error' event semantics and does not listen to the 'error' event.

import { once, EventEmitter } from 'node:events';
import process from 'node:process';

const ee = new EventEmitter();

process.nextTick(() => {
  ee.emit('myevent', 42);
});

const [value] = await once(ee, 'myevent');
console.log(value);

const err = new Error('kaboom');
process.nextTick(() => {
  ee.emit('error', err);
});

try {
  await once(ee, 'myevent');
} catch (err) {
  console.error('error happened', err);
}

The special handling of the 'error' event is only used when events.once() is used to wait for another event. If events.once() is used to wait for the 'error' event itself, then it is treated as any other kind of event without special handling:

import { EventEmitter, once } from 'node:events';

const ee = new EventEmitter();

once(ee, 'error')
  .then(([err]) => console.log('ok', err.message))
  .catch((err) => console.error('error', err.message));

ee.emit('error', new Error('boom'));

// Prints: ok boom

An AbortSignal can be used to cancel waiting for the event:

import { EventEmitter, once } from 'node:events';

const ee = new EventEmitter();
const ac = new AbortController();

async function foo(emitter, event, signal) {
  try {
    await once(emitter, event, { signal });
    console.log('event emitted!');
  } catch (error) {
    if (error.name === 'AbortError') {
      console.error('Waiting for the event was canceled!');
    } else {
      console.error('There was an error', error.message);
    }
  }
}

foo(ee, 'foo', ac.signal);
ac.abort(); // Abort waiting for the event
ee.emit('foo'); // Prints: Waiting for the event was canceled!

eventName: string, options?: StaticEventEmitterOptions): Promise<any[]>;

Creates a Promise that is fulfilled when the EventEmitter emits the given event, or that is rejected if the EventEmitter emits 'error' while waiting. The 'error' semantics and AbortSignal support are the same as described for the overload above.

- n?: number,): void;
import { setMaxListeners, EventEmitter } from 'node:events';

const target = new EventTarget();
const emitter = new EventEmitter();
setMaxListeners(5, target, emitter);

@param n A non-negative number. The maximum number of listeners per EventTarget event.

@param eventTargets Zero or more {EventTarget} or {EventEmitter} instances. If none are specified, n is set as the default max for all newly created {EventTarget} and {EventEmitter} objects.

A utility method for creating a web ReadableStream and WritableStream from a Duplex.
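A minimal sketch of that conversion, assuming Node's experimental `Duplex.toWeb()` static helper and using a `PassThrough` as the duplex (both exist in `node:stream`; the variable names are illustrative):

```javascript
// Convert a node:stream Duplex into a web ReadableStream/WritableStream pair.
import { Duplex, PassThrough } from 'node:stream';

const duplex = new PassThrough();
const { readable, writable } = Duplex.toWeb(duplex);

// Write through the web WritableStream (feeds the duplex's writable side)...
const writer = writable.getWriter();
await writer.write(new TextEncoder().encode('hello'));

// ...and read the echoed bytes back from the web ReadableStream.
const reader = readable.getReader();
const { value } = await reader.read();
console.log(new TextDecoder().decode(value)); // 'hello'
```

Because `PassThrough` echoes its input, whatever is written to the web writable comes back out of the web readable.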
class KeyObject
Node.js uses a KeyObject class to represent a symmetric or asymmetric key, and each kind of key exposes different functions. The createSecretKey, createPublicKey and createPrivateKey methods are used to create KeyObject instances. KeyObject objects are not to be created directly using the new keyword.

Most applications should consider using the new KeyObject API instead of passing keys as strings or Buffers due to improved security features.

KeyObject instances can be passed to other threads via postMessage(). The receiver obtains a cloned KeyObject, and the KeyObject does not need to be listed in the transferList argument.

- asymmetricKeyDetails?: AsymmetricKeyDetails
This property exists only on asymmetric keys. Depending on the type of the key, this object contains information about the key. None of the information obtained through this property can be used to uniquely identify a key or to compromise the security of the key.
For RSA-PSS keys, if the key material contains an RSASSA-PSS-params sequence, the hashAlgorithm, mgf1HashAlgorithm, and saltLength properties will be set.

Other key details might be exposed via this API using additional attributes.
- asymmetricKeyType?: KeyType
For asymmetric keys, this property represents the type of the key. See the supported asymmetric key types.
This property is undefined for unrecognized KeyObject types and symmetric keys.

- symmetricKeySize?: number
For secret keys, this property represents the size of the key in bytes. This property is undefined for asymmetric keys.

- type: KeyObjectType
Depending on the type of this KeyObject, this property is either 'secret' for secret (symmetric) keys, 'public' for public (asymmetric) keys, or 'private' for private (asymmetric) keys.

- ): boolean;
Returns true or false depending on whether the keys have exactly the same type, value, and parameters. This method is not constant time.

@param otherKeyObject A KeyObject with which to compare keyObject.

- ): string | NonSharedBuffer;
For symmetric keys, the following encoding options can be used:
For public keys, the following encoding options can be used:
For private keys, the following encoding options can be used:
The result type depends on the selected encoding format: when PEM, the result is a string; when DER, it will be a buffer containing the data encoded as DER; when JWK, it will be an object.

When the JWK encoding format is selected, all other encoding options are ignored.

PKCS#1, SEC1, and PKCS#8 type keys can be encrypted by using a combination of the cipher and format options. The PKCS#8 type can be used with any format to encrypt any key algorithm (RSA, EC, or DH) by specifying a cipher. PKCS#1 and SEC1 can only be encrypted by specifying a cipher when the PEM format is used. For maximum compatibility, use PKCS#8 for encrypted private keys. Since PKCS#8 defines its own encryption mechanism, PEM-level encryption is not supported when encrypting a PKCS#8 key. See RFC 5208 for PKCS#8 encryption and RFC 1421 for PKCS#1 and SEC1 encryption.

): NonSharedBuffer;

For symmetric keys, the following encoding options can be used:
For public keys, the following encoding options can be used:
For private keys, the following encoding options can be used:
The result type and encryption semantics are the same as described for the overload above: PEM yields a string, DER a buffer, and JWK an object; JWK ignores all other encoding options; and PKCS#8 should be preferred for encrypted private keys.

For symmetric keys, the following encoding options can be used:
For public keys, the following encoding options can be used:
For private keys, the following encoding options can be used:
The result type and encryption semantics are the same as described for the overload above: PEM yields a string, DER a buffer, and JWK an object; JWK ignores all other encoding options; and PKCS#8 should be preferred for encrypted private keys.

- extractable: boolean,
Converts a KeyObject instance to a CryptoKey.

Example: Converting a CryptoKey instance to a KeyObject:

const { KeyObject } = await import('node:crypto');
const { subtle } = globalThis.crypto;

const key = await subtle.generateKey({
  name: 'HMAC',
  hash: 'SHA-256',
  length: 256,
}, true, ['sign', 'verify']);

const keyObject = KeyObject.from(key);
console.log(keyObject.symmetricKeySize);
// Prints: 32 (symmetric key size in bytes)
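A minimal sketch of the export encodings described above (PEM, DER, JWK, and raw bytes for secret keys), using only documented `node:crypto` calls; the curve and key sizes are illustrative choices:

```javascript
// Export a KeyObject in each of the documented encoding formats.
import { generateKeyPairSync, createSecretKey } from 'node:crypto';

const { publicKey, privateKey } = generateKeyPairSync('ec', {
  namedCurve: 'P-256',
});

// PEM yields a string; PKCS#8 is the most compatible private-key type.
const pem = privateKey.export({ type: 'pkcs8', format: 'pem' });
console.log(typeof pem); // 'string'

// DER yields a Buffer; SPKI is the usual public-key type.
const der = publicKey.export({ type: 'spki', format: 'der' });
console.log(Buffer.isBuffer(der)); // true

// JWK yields a plain object; other encoding options are ignored.
const jwk = publicKey.export({ format: 'jwk' });
console.log(jwk.kty); // 'EC'

// Secret keys export their raw bytes.
const secret = createSecretKey(Buffer.alloc(32));
console.log(secret.export().length); // 32
```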
class Sign
The Sign class is a utility for generating signatures. It can be used in one of two ways:

- As a writable stream, where data to be signed is written and the sign.sign() method is used to generate and return the signature, or
- Using the sign.update() and sign.sign() methods to produce the signature.

The createSign method is used to create Sign instances. The argument is the string name of the hash function to use. Sign objects are not to be created directly using the new keyword.

Example: Using Sign and Verify objects as streams:

const {
  generateKeyPairSync,
  createSign,
  createVerify,
} = await import('node:crypto');

const { privateKey, publicKey } = generateKeyPairSync('ec', {
  namedCurve: 'sect239k1',
});

const sign = createSign('SHA256');
sign.write('some data to sign');
sign.end();
const signature = sign.sign(privateKey, 'hex');

const verify = createVerify('SHA256');
verify.write('some data to sign');
verify.end();
console.log(verify.verify(publicKey, signature, 'hex'));
// Prints: true

Example: Using the sign.update() and verify.update() methods:

const {
  generateKeyPairSync,
  createSign,
  createVerify,
} = await import('node:crypto');

const { privateKey, publicKey } = generateKeyPairSync('rsa', {
  modulusLength: 2048,
});

const sign = createSign('SHA256');
sign.update('some data to sign');
sign.end();
const signature = sign.sign(privateKey);

const verify = createVerify('SHA256');
verify.update('some data to sign');
verify.end();
console.log(verify.verify(publicKey, signature));
// Prints: true

- readonly writable: boolean
Is true if it is safe to call writable.write(), which means the stream has not been destroyed, errored, or ended.

- readonly writableAborted: boolean

Returns whether the stream was destroyed or errored before emitting 'finish'.

- readonly writableCorked: number

Number of times writable.uncork() needs to be called in order to fully uncork the stream.

- readonly writableEnded: boolean

Is true after writable.end() has been called. This property does not indicate whether the data has been flushed; for this, use writable.writableFinished instead.

- readonly writableHighWaterMark: number

Returns the value of highWaterMark passed when creating this Writable.

- readonly writableLength: number

This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the highWaterMark.

- readonly writableNeedDrain: boolean

Is true if the stream's buffer has been full and the stream will emit 'drain'.

- static captureRejections: boolean
Value: boolean

Change the default captureRejections option on all new EventEmitter objects.

- readonly static captureRejectionSymbol: typeof captureRejectionSymbol

Value: Symbol.for('nodejs.rejection')

See how to write a custom rejection handler.

- static defaultMaxListeners: number

By default, a maximum of 10 listeners can be registered for any single event. This limit can be changed for individual EventEmitter instances using the emitter.setMaxListeners(n) method. To change the default for all EventEmitter instances, the events.defaultMaxListeners property can be used. If this value is not a positive number, a RangeError is thrown.

Take caution when setting events.defaultMaxListeners because the change affects all EventEmitter instances, including those created before the change is made. However, calling emitter.setMaxListeners(n) still has precedence over events.defaultMaxListeners.

This is not a hard limit. The EventEmitter instance will allow more listeners to be added but will output a trace warning to stderr indicating that a "possible EventEmitter memory leak" has been detected. For any single EventEmitter, the emitter.getMaxListeners() and emitter.setMaxListeners() methods can be used to temporarily avoid this warning:

import { EventEmitter } from 'node:events';

const emitter = new EventEmitter();
emitter.setMaxListeners(emitter.getMaxListeners() + 1);
emitter.once('event', () => {
  // do stuff
  emitter.setMaxListeners(Math.max(emitter.getMaxListeners() - 1, 0));
});

The --trace-warnings command-line flag can be used to display the stack trace for such warnings.

The emitted warning can be inspected with process.on('warning') and will have the additional emitter, type, and count properties, referring to the event emitter instance, the event's name, and the number of attached listeners, respectively. Its name property is set to 'MaxListenersExceededWarning'.

- readonly static errorMonitor: typeof errorMonitor
This symbol shall be used to install a listener for only monitoring 'error' events. Listeners installed using this symbol are called before the regular 'error' listeners are called.

Installing a listener using this symbol does not change the behavior once an 'error' event is emitted. Therefore, the process will still crash if no regular 'error' listener is installed.

Calls writable.destroy() with an AbortError and returns a promise that fulfills when the stream is finished.

- event: 'close', listener: () => void): this;
Event emitter. The defined events on documents include:

- close
- drain
- error
- finish
- pipe
- unpipe
event: 'drain', listener: () => void): this;

Event emitter. The defined events on documents include: close, drain, error, finish, pipe, unpipe.

event: 'error',): this;

Event emitter. The defined events on documents include: close, drain, error, finish, pipe, unpipe.

event: 'finish', listener: () => void): this;

Event emitter. The defined events on documents include: close, drain, error, finish, pipe, unpipe.

event: 'pipe',): this;

Event emitter. The defined events on documents include: close, drain, error, finish, pipe, unpipe.

event: 'unpipe',): this;

Event emitter. The defined events on documents include: close, drain, error, finish, pipe, unpipe.

event: string | symbol, listener: (...args: any[]) => void): this;

Event emitter. The defined events on documents include: close, drain, error, finish, pipe, unpipe.
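A minimal sketch of observing two of the lifecycle events listed above ('finish' and 'close') on a simple Writable; the sink and the `seen` array are illustrative:

```javascript
// Track Writable lifecycle events: 'finish' fires after end() flushes,
// and 'close' fires once the stream is destroyed (autoDestroy defaults to true).
import { Writable } from 'node:stream';

const seen = [];
const sink = new Writable({
  write(chunk, encoding, callback) {
    callback(); // accept every chunk immediately
  },
});

sink.on('finish', () => seen.push('finish'));
sink.on('close', () => seen.push('close'));

sink.write('data');
sink.end(() => {
  // The end() callback runs on 'finish'; 'close' follows afterwards.
  console.log(seen); // [ 'finish' ] at this point
});
```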
- stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
The writable.cork() method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.

The primary intent of writable.cork() is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them all to writable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork() without implementing writable._writev() may have an adverse effect on throughput.

See also: writable.uncork(), writable._writev().

- ): this;
Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the writable stream has ended and subsequent calls to write() or end() will result in an ERR_STREAM_DESTROYED error. This is a destructive and immediate way to destroy a stream. Previous calls to write() may not have drained, and may trigger an ERR_STREAM_DESTROYED error. Use end() instead of destroy if data should flush before close, or wait for the 'drain' event before destroying the stream.

Once destroy() has been called, any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.

Implementors should not override this method, but instead implement writable._destroy().

@param error Optional, an error to emit with the 'error' event.

- emit(event: 'close'): boolean;
Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

Returns true if the event had listeners, false otherwise.

import { EventEmitter } from 'node:events';

const myEmitter = new EventEmitter();

// First listener
myEmitter.on('event', function firstListener() {
  console.log('Helloooo! first listener');
});
// Second listener
myEmitter.on('event', function secondListener(arg1, arg2) {
  console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
});
// Third listener
myEmitter.on('event', function thirdListener(...args) {
  const parameters = args.join(', ');
  console.log(`event with parameters ${parameters} in third listener`);
});

console.log(myEmitter.listeners('event'));

myEmitter.emit('event', 1, 2, 3, 4, 5);

// Prints:
// [
//   [Function: firstListener],
//   [Function: secondListener],
//   [Function: thirdListener]
// ]
// Helloooo! first listener
// event with parameters 1, 2 in second listener
// event with parameters 1, 2, 3, 4, 5 in third listener

- end(cb?: () => void): this;
Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

Calling the write method after calling end will raise an error.

// Write 'hello, ' and then end with 'world!'.
import fs from 'node:fs';

const file = fs.createWriteStream('example.txt');
file.write('hello, ');
file.end('world!');
// Writing more now is not allowed!

end(chunk: any, cb?: () => void): this;

Same behavior as the overload above.

@param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

end(chunk: any, encoding: BufferEncoding, cb?: () => void): this;

Same behavior as the overload above.

@param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

@param encoding The encoding if chunk is a string

Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or Symbols.

import { EventEmitter } from 'node:events';

const myEE = new EventEmitter();
myEE.on('foo', () => {});
myEE.on('bar', () => {});

const sym = Symbol('symbol');
myEE.on(sym, () => {});

console.log(myEE.eventNames());
// Prints: [ 'foo', 'bar', Symbol(symbol) ]

Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to EventEmitter.defaultMaxListeners.

- eventName: string | symbol, listener?: Function): number;
Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.

@param eventName The name of the event being listened for

@param listener The event handler function
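A minimal sketch of both forms, assuming a Node.js version where the optional listener argument is supported; the event name and handlers are illustrative:

```javascript
// Count listeners for an event, optionally counting a specific function.
import { EventEmitter } from 'node:events';

const ee = new EventEmitter();
const handler = () => {};

ee.on('ping', handler);
ee.on('ping', handler); // same function registered twice
ee.on('ping', () => {}); // a different, anonymous listener

console.log(ee.listenerCount('ping'));          // 3
console.log(ee.listenerCount('ping', handler)); // 2
```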
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named eventName.

server.on('connection', (stream) => {
  console.log('someone connected!');
});
console.log(util.inspect(server.listeners('connection')));
// Prints: [ [Function] ]

- eventName: string | symbol, listener: (...args: any[]) => void): this;
Alias for emitter.removeListener().

- on(event: 'close', listener: () => void): this;
Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

server.on('connection', (stream) => {
  console.log('someone connected!');
});

Returns a reference to the EventEmitter, so that calls can be chained.

By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

import { EventEmitter } from 'node:events';

const myEE = new EventEmitter();
myEE.on('foo', () => console.log('a'));
myEE.prependListener('foo', () => console.log('b'));
myEE.emit('foo');
// Prints:
//   b
//   a

@param listener The callback function
- once(event: 'close',listener: () => void): this;
Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

server.once('connection', (stream) => {
  console.log('Ah, we have our first user!');
});

Returns a reference to the EventEmitter, so that calls can be chained.

By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

import { EventEmitter } from 'node:events';

const myEE = new EventEmitter();
myEE.once('foo', () => console.log('a'));
myEE.prependOnceListener('foo', () => console.log('b'));
myEE.emit('foo');
// Prints:
//   b
//   a

@param listener The callback function
- event: 'close',listener: () => void): this;
Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

server.prependListener('connection', (stream) => {
  console.log('someone connected!');
});

Returns a reference to the EventEmitter, so that calls can be chained.

@param listener The callback function
- event: 'close',listener: () => void): this;
Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

server.prependOnceListener('connection', (stream) => {
  console.log('Ah, we have our first user!');
});

Returns a reference to the EventEmitter, so that calls can be chained.

@param listener The callback function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named eventName, including any wrappers (such as those created by .once()).

import { EventEmitter } from 'node:events';

const emitter = new EventEmitter();
emitter.once('log', () => console.log('log once'));

// Returns a new Array with a function `onceWrapper` which has a property
// `listener` which contains the original listener bound above
const listeners = emitter.rawListeners('log');
const logFnWrapper = listeners[0];

// Logs "log once" to the console and does not unbind the `once` event
logFnWrapper.listener();

// Logs "log once" to the console and removes the listener
logFnWrapper();

emitter.on('log', () => console.log('log persistently'));
// Will return a new Array with a single function bound by `.on()` above
const newListeners = emitter.rawListeners('log');

// Logs "log persistently" twice
newListeners[0]();
emitter.emit('log');

- eventName?: string | symbol): this;
Removes all listeners, or those of the specified eventName.

It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).

Returns a reference to the EventEmitter, so that calls can be chained.

- event: 'close', listener: () => void): this;
Removes the specified listener from the listener array for the event named eventName.

const callback = (stream) => {
  console.log('someone connected!');
};
server.on('connection', callback);
// ...
server.removeListener('connection', callback);

removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance.

Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from emit() in progress. Subsequent events behave as expected.

import { EventEmitter } from 'node:events';

class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();

const callbackA = () => {
  console.log('A');
  myEmitter.removeListener('event', callbackB);
};

const callbackB = () => {
  console.log('B');
};

myEmitter.on('event', callbackA);
myEmitter.on('event', callbackB);

// callbackA removes listener callbackB but it will still be called.
// Internal listener array at time of emit [callbackA, callbackB]
myEmitter.emit('event');
// Prints:
//   A
//   B

// callbackB is now removed.
// Internal listener array [callbackA]
myEmitter.emit('event');
// Prints:
//   A

Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.

When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed:

import { EventEmitter } from 'node:events';

const ee = new EventEmitter();

function pong() {
  console.log('pong');
}

ee.on('ping', pong);
ee.once('ping', pong);
ee.removeListener('ping', pong);

ee.emit('ping');
ee.emit('ping');

Returns a reference to the EventEmitter, so that calls can be chained.

- encoding: BufferEncoding): this;
The writable.setDefaultEncoding() method sets the default encoding for a Writable stream.

@param encoding The new default encoding
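A minimal sketch of the effect: with a default encoding of 'hex', a plain string written to the stream is decoded as hex rather than utf8. The sink and variable names are illustrative:

```javascript
// Set a default encoding so string chunks are decoded as hex bytes.
import { Writable } from 'node:stream';

const chunks = [];
const sink = new Writable({
  write(chunk, encoding, callback) {
    chunks.push(chunk); // chunk arrives as a decoded Buffer
    callback();
  },
});

sink.setDefaultEncoding('hex');
sink.write('deadbeef'); // interpreted as 4 hex bytes, not 8 utf8 bytes
sink.end(() => {
  console.log(chunks[0].length); // 4
});
```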
- n: number): this;
By default EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps finding memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.

Returns a reference to the EventEmitter, so that calls can be chained.

- sign(): NonSharedBuffer;
Calculates the signature on all the data passed through using either sign.update() or sign.write().

If privateKey is not a KeyObject, this function behaves as if privateKey had been passed to createPrivateKey. If it is an object, the following additional properties can be passed:

If outputEncoding is provided a string is returned; otherwise a Buffer is returned.

The Sign object cannot be used again after the sign.sign() method has been called. Multiple calls to sign.sign() will result in an error being thrown.

sign(): string;

Same behavior as the overload above; with an outputEncoding, a string is returned instead of a Buffer.

The
writable.uncork() method flushes all data buffered since cork was called.

When using writable.cork() and writable.uncork() to manage the buffering of writes to a stream, defer calls to writable.uncork() using process.nextTick(). Doing so allows batching of all writable.write() calls that occur within a given Node.js event loop phase.

stream.cork();
stream.write('some ');
stream.write('data ');
process.nextTick(() => stream.uncork());

If the writable.cork() method is called multiple times on a stream, the same number of calls to writable.uncork() must be called to flush the buffered data.

stream.cork();
stream.write('some ');
stream.cork();
stream.write('data ');
process.nextTick(() => {
  stream.uncork();
  // The data will not be flushed until uncork() is called a second time.
  stream.uncork();
});

See also: writable.cork().

- ): this;
Updates the Sign content with the given data, the encoding of which is given in inputEncoding. If encoding is not provided, and the data is a string, an encoding of 'utf8' is enforced. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.

This can be called many times with new data as it is streamed. data: string,): this; Updates the Sign content with the given data, the encoding of which is given in inputEncoding. If encoding is not provided, and the data is a string, an encoding of 'utf8' is enforced. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.

This can be called many times with new data as it is streamed.

@param inputEncoding The encoding of the data string. - chunk: any,): boolean;
The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing to a socket that is not draining may lead to a remotely exploitable vulnerability.

Writing data while the stream is not draining is particularly problematic for a Transform, because Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event: function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });

A Writable stream in object mode will always ignore the encoding argument.

@param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.
@param callback Callback for when this chunk of data is flushed.
@returns false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true. chunk: any,encoding: BufferEncoding,): boolean; The
writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing to a socket that is not draining may lead to a remotely exploitable vulnerability.

Writing data while the stream is not draining is particularly problematic for a Transform, because Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event: function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });

A Writable stream in object mode will always ignore the encoding argument.

@param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.
@param encoding The encoding, if chunk is a string.
@param callback Callback for when this chunk of data is flushed.
@returns false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true. - ): Disposable;
Listens once to the abort event on the provided signal.

Listening to the abort event on abort signals is unsafe and may lead to resource leaks since another third party with the signal can call e.stopImmediatePropagation(). Unfortunately Node.js cannot change this since it would violate the web standard. Additionally, the original API makes it easy to forget to remove listeners.

This API allows safely using AbortSignals in Node.js APIs by solving these two issues by listening to the event such that stopImmediatePropagation does not prevent the listener from running.

Returns a disposable so that it may be unsubscribed from more easily. import { addAbortListener } from 'node:events'; function example(signal) { let disposable; try { signal.addEventListener('abort', (e) => e.stopImmediatePropagation()); disposable = addAbortListener(signal, (e) => { // Do something when signal is aborted. }); } finally { disposable?.[Symbol.dispose](); } }

@returns Disposable that removes the abort listener. - options?: Pick<WritableOptions<Writable>, 'signal' | 'decodeStrings' | 'highWaterMark' | 'objectMode'>
A utility method for creating a Writable from a web WritableStream. - name: string | symbol): Function[];
Returns a copy of the array of listeners for the event named eventName.

For EventEmitters this behaves exactly the same as calling .listeners on the emitter.

For EventTargets this is the only way to get the event listeners for the event target. This is useful for debugging and diagnostic purposes. import { getEventListeners, EventEmitter } from 'node:events'; { const ee = new EventEmitter(); const listener = () => console.log('Events are fun'); ee.on('foo', listener); console.log(getEventListeners(ee, 'foo')); // [ [Function: listener] ] } { const et = new EventTarget(); const listener = () => console.log('Events are fun'); et.addEventListener('foo', listener); console.log(getEventListeners(et, 'foo')); // [ [Function: listener] ] } - ): number;
Returns the currently set max amount of listeners.

For EventEmitters this behaves exactly the same as calling .getMaxListeners on the emitter.

For EventTargets this is the only way to get the max event listeners for the event target. If the number of event handlers on a single EventTarget exceeds the max set, the EventTarget will print a warning. import { getMaxListeners, setMaxListeners, EventEmitter } from 'node:events'; { const ee = new EventEmitter(); console.log(getMaxListeners(ee)); // 10 setMaxListeners(11, ee); console.log(getMaxListeners(ee)); // 11 } { const et = new EventTarget(); console.log(getMaxListeners(et)); // 10 setMaxListeners(11, et); console.log(getMaxListeners(et)); // 11 } - emitter: EventEmitter,eventName: string | symbol,options?: StaticEventEmitterIteratorOptions): AsyncIterator<any[]>;
import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); }); for await (const event of on(ee, 'foo')) { // The execution of this inner block is synchronous and it // processes one event at a time (even with await). Do not use // if concurrent execution is required. console.log(event); // prints ['bar'] [42] } // Unreachable here

Returns an AsyncIterator that iterates eventName events. It will throw if the EventEmitter emits 'error'. It removes all listeners when exiting the loop. The value returned by each iteration is an array composed of the emitted event arguments.

An AbortSignal can be used to cancel waiting on events: import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ac = new AbortController(); (async () => { const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); }); for await (const event of on(ee, 'foo', { signal: ac.signal })) { // The execution of this inner block is synchronous and it // processes one event at a time (even with await). Do not use // if concurrent execution is required. console.log(event); // prints ['bar'] [42] } // Unreachable here })(); process.nextTick(() => ac.abort());

Use the close option to specify an array of event names that will end the iteration: import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); ee.emit('close'); }); for await (const event of on(ee, 'foo', { close: ['close'] })) { console.log(event); // prints ['bar'] [42] } // the loop will exit after 'close' is emitted console.log('done'); // prints 'done'

@returns An AsyncIterator that iterates eventName events emitted by the emitter eventName: string,options?: StaticEventEmitterIteratorOptions): AsyncIterator<any[]>; import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); }); for await (const event of on(ee, 'foo')) { // The execution of this inner block is synchronous and it // processes one event at a time (even with await). Do not use // if concurrent execution is required. console.log(event); // prints ['bar'] [42] } // Unreachable here

Returns an AsyncIterator that iterates eventName events. It will throw if the EventEmitter emits 'error'. It removes all listeners when exiting the loop. The value returned by each iteration is an array composed of the emitted event arguments.

An AbortSignal can be used to cancel waiting on events: import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ac = new AbortController(); (async () => { const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); }); for await (const event of on(ee, 'foo', { signal: ac.signal })) { // The execution of this inner block is synchronous and it // processes one event at a time (even with await). Do not use // if concurrent execution is required. console.log(event); // prints ['bar'] [42] } // Unreachable here })(); process.nextTick(() => ac.abort());

Use the close option to specify an array of event names that will end the iteration: import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); ee.emit('close'); }); for await (const event of on(ee, 'foo', { close: ['close'] })) { console.log(event); // prints ['bar'] [42] } // the loop will exit after 'close' is emitted console.log('done'); // prints 'done'

@returns An AsyncIterator that iterates eventName events emitted by the emitter - emitter: EventEmitter,eventName: string | symbol,options?: StaticEventEmitterOptions): Promise<any[]>;
Creates a Promise that is fulfilled when the EventEmitter emits the given event or that is rejected if the EventEmitter emits 'error' while waiting. The Promise will resolve with an array of all the arguments emitted to the given event.

This method is intentionally generic and works with the web platform EventTarget interface, which has no special 'error' event semantics and does not listen to the 'error' event. import { once, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); process.nextTick(() => { ee.emit('myevent', 42); }); const [value] = await once(ee, 'myevent'); console.log(value); const err = new Error('kaboom'); process.nextTick(() => { ee.emit('error', err); }); try { await once(ee, 'myevent'); } catch (err) { console.error('error happened', err); }

The special handling of the 'error' event is only used when events.once() is used to wait for another event. If events.once() is used to wait for the 'error' event itself, then it is treated as any other kind of event without special handling: import { EventEmitter, once } from 'node:events'; const ee = new EventEmitter(); once(ee, 'error') .then(([err]) => console.log('ok', err.message)) .catch((err) => console.error('error', err.message)); ee.emit('error', new Error('boom')); // Prints: ok boom

An AbortSignal can be used to cancel waiting for the event: import { EventEmitter, once } from 'node:events'; const ee = new EventEmitter(); const ac = new AbortController(); async function foo(emitter, event, signal) { try { await once(emitter, event, { signal }); console.log('event emitted!'); } catch (error) { if (error.name === 'AbortError') { console.error('Waiting for the event was canceled!'); } else { console.error('There was an error', error.message); } } } foo(ee, 'foo', ac.signal); ac.abort(); // Abort waiting for the event ee.emit('foo'); // Prints: Waiting for the event was canceled! eventName: string,options?: StaticEventEmitterOptions): Promise<any[]>; Creates a Promise that is fulfilled when the EventEmitter emits the given event or that is rejected if the EventEmitter emits 'error' while waiting. The Promise will resolve with an array of all the arguments emitted to the given event.

This method is intentionally generic and works with the web platform EventTarget interface, which has no special 'error' event semantics and does not listen to the 'error' event. import { once, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); process.nextTick(() => { ee.emit('myevent', 42); }); const [value] = await once(ee, 'myevent'); console.log(value); const err = new Error('kaboom'); process.nextTick(() => { ee.emit('error', err); }); try { await once(ee, 'myevent'); } catch (err) { console.error('error happened', err); }

The special handling of the 'error' event is only used when events.once() is used to wait for another event. If events.once() is used to wait for the 'error' event itself, then it is treated as any other kind of event without special handling: import { EventEmitter, once } from 'node:events'; const ee = new EventEmitter(); once(ee, 'error') .then(([err]) => console.log('ok', err.message)) .catch((err) => console.error('error', err.message)); ee.emit('error', new Error('boom')); // Prints: ok boom

An AbortSignal can be used to cancel waiting for the event: import { EventEmitter, once } from 'node:events'; const ee = new EventEmitter(); const ac = new AbortController(); async function foo(emitter, event, signal) { try { await once(emitter, event, { signal }); console.log('event emitted!'); } catch (error) { if (error.name === 'AbortError') { console.error('Waiting for the event was canceled!'); } else { console.error('There was an error', error.message); } } } foo(ee, 'foo', ac.signal); ac.abort(); // Abort waiting for the event ee.emit('foo'); // Prints: Waiting for the event was canceled! - n?: number,): void;
import { setMaxListeners, EventEmitter } from 'node:events'; const target = new EventTarget(); const emitter = new EventEmitter(); setMaxListeners(5, target, emitter);

@param n A non-negative number. The maximum number of listeners per EventTarget event.
@param eventTargets Zero or more {EventTarget} or {EventEmitter} instances. If none are specified, n is set as the default max for all newly created {EventTarget} and {EventEmitter} objects.

A utility method for creating a web
WritableStream from a Writable.
class Verify
The Verify class is a utility for verifying signatures. It can be used in one of two ways:

- As a writable stream where written data is used to validate against the supplied signature, or
- Using the verify.update() and verify.verify() methods to verify the signature.
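Both styles pair with the Sign class; a minimal sketch of a sign/verify round trip using createSign and createVerify with a throwaway RSA key pair (the algorithm and key size here are illustrative choices, not requirements):

```javascript
import { createSign, createVerify, generateKeyPairSync } from 'node:crypto';

// Generate a disposable RSA key pair just for this demonstration.
const { privateKey, publicKey } = generateKeyPairSync('rsa', {
  modulusLength: 2048,
});

// Sign the data with the private key.
const sign = createSign('SHA256');
sign.update('some data to sign');
sign.end();
const signature = sign.sign(privateKey, 'hex');

// Verify the signature with the corresponding public key.
const verify = createVerify('SHA256');
verify.update('some data to sign');
verify.end();
console.log(verify.verify(publicKey, signature, 'hex')); // true
```

Note that, as described above, each Sign or Verify object is single-use: create a fresh instance for every signature computed or checked.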
The createVerify method is used to create Verify instances. Verify objects are not to be created directly using the new keyword.

See Sign for examples. - readonly writable: boolean
Is true if it is safe to call writable.write(), which means the stream has not been destroyed, errored, or ended. - readonly writableAborted: boolean
Returns whether the stream was destroyed or errored before emitting 'finish'. - readonly writableCorked: number
Number of times writable.uncork() needs to be called in order to fully uncork the stream. - readonly writableEnded: boolean
Is true after writable.end() has been called. This property does not indicate whether the data has been flushed; for this use writable.writableFinished instead. - readonly writableHighWaterMark: number
Return the value of highWaterMark passed when creating this Writable. - readonly writableLength: number
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the highWaterMark. - readonly writableNeedDrain: boolean
Is true if the stream's buffer has been full and the stream will emit 'drain'. - static captureRejections: boolean
Value: boolean

Change the default captureRejections option on all new EventEmitter objects. - readonly static captureRejectionSymbol: typeof captureRejectionSymbol
Value: Symbol.for('nodejs.rejection')

See how to write a custom rejection handler. - static defaultMaxListeners: number
By default, a maximum of 10 listeners can be registered for any single event. This limit can be changed for individual EventEmitter instances using the emitter.setMaxListeners(n) method. To change the default for all EventEmitter instances, the events.defaultMaxListeners property can be used. If this value is not a positive number, a RangeError is thrown.

Take caution when setting events.defaultMaxListeners because the change affects all EventEmitter instances, including those created before the change is made. However, calling emitter.setMaxListeners(n) still has precedence over events.defaultMaxListeners.

This is not a hard limit. The EventEmitter instance will allow more listeners to be added but will output a trace warning to stderr indicating that a "possible EventEmitter memory leak" has been detected. For any single EventEmitter, the emitter.getMaxListeners() and emitter.setMaxListeners() methods can be used to temporarily avoid this warning: import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.setMaxListeners(emitter.getMaxListeners() + 1); emitter.once('event', () => { // do stuff emitter.setMaxListeners(Math.max(emitter.getMaxListeners() - 1, 0)); });

The --trace-warnings command-line flag can be used to display the stack trace for such warnings.

The emitted warning can be inspected with process.on('warning') and will have the additional emitter, type, and count properties, referring to the event emitter instance, the event's name, and the number of attached listeners, respectively. Its name property is set to 'MaxListenersExceededWarning'. - readonly static errorMonitor: typeof errorMonitor
This symbol shall be used to install a listener for only monitoring 'error' events. Listeners installed using this symbol are called before the regular 'error' listeners are called.

Installing a listener using this symbol does not change the behavior once an 'error' event is emitted. Therefore, the process will still crash if no regular 'error' listener is installed.

Calls writable.destroy() with an AbortError and returns a promise that fulfills when the stream is finished. - event: 'close',listener: () => void): this;
Event emitter. The defined events on documents including:
- close
- drain
- error
- finish
- pipe
- unpipe
event: 'drain',listener: () => void): this; Event emitter. The defined events on documents including:
- close
- drain
- error
- finish
- pipe
- unpipe
event: 'error',): this; Event emitter. The defined events on documents including:
- close
- drain
- error
- finish
- pipe
- unpipe
event: 'finish',listener: () => void): this; Event emitter. The defined events on documents including:
- close
- drain
- error
- finish
- pipe
- unpipe
event: 'pipe',): this; Event emitter. The defined events on documents including:
- close
- drain
- error
- finish
- pipe
- unpipe
event: 'unpipe',): this; Event emitter. The defined events on documents including:
- close
- drain
- error
- finish
- pipe
- unpipe
event: string | symbol,listener: (...args: any[]) => void): this; Event emitter. The defined events on documents including:
- close
- drain
- error
- finish
- pipe
- unpipe
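The events listed above can be observed on any Writable; a minimal sketch using a toy in-memory Writable (this stream implementation is illustrative, not part of the API above):

```javascript
import { Writable } from 'node:stream';

// A toy Writable that collects chunks in memory.
const chunks = [];
const sink = new Writable({
  write(chunk, encoding, callback) {
    chunks.push(chunk);
    callback(); // signal that the chunk has been handled
  },
});

sink.on('finish', () => {
  // 'finish' fires after end() once all buffered data has been flushed.
  console.log('finished with', chunks.length, 'chunks');
});
sink.on('close', () => {
  // 'close' fires when the stream and its underlying resources are released.
  console.log('closed');
});

sink.write('hello ');
sink.end('world');
```

'finish' is always emitted before 'close'; 'drain', 'pipe', and 'unpipe' fire under the backpressure and piping conditions described in the write() documentation above.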
- stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
The writable.cork() method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.

The primary intent of writable.cork() is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them all to writable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork() without implementing writable._writev() may have an adverse effect on throughput.

See also: writable.uncork(), writable._writev(). - ): this;
Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the writable stream has ended and subsequent calls to write() or end() will result in an ERR_STREAM_DESTROYED error. This is a destructive and immediate way to destroy a stream. Previous calls to write() may not have drained, and may trigger an ERR_STREAM_DESTROYED error. Use end() instead of destroy if data should flush before close, or wait for the 'drain' event before destroying the stream.

Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.

Implementors should not override this method, but instead implement writable._destroy().

@param error Optional, an error to emit with the 'error' event. - emit(event: 'close'): boolean;
Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

Returns true if the event had listeners, false otherwise. import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener - end(cb?: () => void): this;
Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

Calling the write method after calling end will raise an error. // Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed! end(chunk: any,cb?: () => void): this; Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

Calling the write method after calling end will raise an error. // Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!

@param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null. end(chunk: any,encoding: BufferEncoding,cb?: () => void): this; Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

Calling the write method after calling end will raise an error. // Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!

@param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.
@param encoding The encoding if chunk is a string

Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or Symbols. import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]

Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to EventEmitter.defaultMaxListeners. - eventName: string | symbol,listener?: Function): number;
Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.

@param eventName The name of the event being listened for
@param listener The event handler function
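For instance, counting all listeners versus occurrences of one specific handler (a minimal sketch; the event name and handlers are illustrative):

```javascript
import { EventEmitter } from 'node:events';

const ee = new EventEmitter();
const handler = () => {};

ee.on('ping', handler);
ee.on('ping', handler); // the same handler registered twice
ee.on('ping', () => {}); // a different handler

console.log(ee.listenerCount('ping')); // 3
console.log(ee.listenerCount('ping', handler)); // 2
```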
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named eventName. server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ] - eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for emitter.removeListener(). - on(event: 'close',listener: () => void): this;
Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times. server.on('connection', (stream) => { console.log('someone connected!'); });

Returns a reference to the EventEmitter, so that calls can be chained.

By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array. import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a

@param listener The callback function
- once(event: 'close',listener: () => void): this;
Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked. server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });

Returns a reference to the EventEmitter, so that calls can be chained.

By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array. import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a

@param listener The callback function
- event: 'close',listener: () => void): this;
Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times. server.prependListener('connection', (stream) => { console.log('someone connected!'); });

Returns a reference to the EventEmitter, so that calls can be chained.

@param listener The callback function
- event: 'close',listener: () => void): this;
Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked. server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });

Returns a reference to the EventEmitter, so that calls can be chained.

@param listener The callback function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named eventName, including any wrappers (such as those created by .once()). import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log'); - eventName?: string | symbol): this;
Removes all listeners, or those of the specified eventName.

It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).

Returns a reference to the EventEmitter, so that calls can be chained. - event: 'close',listener: () => void): this;
Removes the specified `listener` from the listener array for the event named `eventName`.

```javascript
const callback = (stream) => {
  console.log('someone connected!');
};
server.on('connection', callback);
// ...
server.removeListener('connection', callback);
```

`removeListener()` will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified `eventName`, then `removeListener()` must be called multiple times to remove each instance.

Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any `removeListener()` or `removeAllListeners()` calls after emitting and before the last listener finishes execution will not remove them from `emit()` in progress. Subsequent events behave as expected.

```javascript
import { EventEmitter } from 'node:events';
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();

const callbackA = () => {
  console.log('A');
  myEmitter.removeListener('event', callbackB);
};

const callbackB = () => {
  console.log('B');
};

myEmitter.on('event', callbackA);
myEmitter.on('event', callbackB);

// callbackA removes listener callbackB but it will still be called.
// Internal listener array at time of emit [callbackA, callbackB]
myEmitter.emit('event');
// Prints:
//   A
//   B

// callbackB is now removed.
// Internal listener array [callbackA]
myEmitter.emit('event');
// Prints:
//   A
```

Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the `emitter.listeners()` method will need to be recreated.

When a single function has been added as a handler multiple times for a single event (as in the example below), `removeListener()` will remove the most recently added instance. In the example the `once('ping')` listener is removed:

```javascript
import { EventEmitter } from 'node:events';
const ee = new EventEmitter();

function pong() {
  console.log('pong');
}

ee.on('ping', pong);
ee.once('ping', pong);
ee.removeListener('ping', pong);

ee.emit('ping');
ee.emit('ping');
```

Returns a reference to the `EventEmitter`, so that calls can be chained.

- encoding: BufferEncoding): this;
The
`writable.setDefaultEncoding()` method sets the default `encoding` for a `Writable` stream.

@param encoding The new default encoding
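As an illustrative sketch (the byte-collecting `Writable` here is hypothetical, not part of the API), setting a default encoding changes how string chunks are decoded:

```javascript
import { Writable } from 'node:stream';

// A small Writable that collects the decoded bytes it receives.
const chunks = [];
const sink = new Writable({
  write(chunk, encoding, callback) {
    chunks.push(chunk); // string chunks arrive here decoded to Buffers
    callback();
  },
});

sink.setDefaultEncoding('hex');
sink.write('48656c6c6f'); // now interpreted as hex: the bytes of "Hello"
sink.end(() => {
  console.log(Buffer.concat(chunks).toString('utf8')); // prints "Hello"
});
```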
- n: number): this;
By default
`EventEmitter`s will print a warning if more than `10` listeners are added for a particular event. This is a useful default that helps finding memory leaks. The `emitter.setMaxListeners()` method allows the limit to be modified for this specific `EventEmitter` instance. The value can be set to `Infinity` (or `0`) to indicate an unlimited number of listeners.

Returns a reference to the `EventEmitter`, so that calls can be chained.

The `writable.uncork()` method flushes all data buffered since `cork()` was called.

When using `writable.cork()` and `writable.uncork()` to manage the buffering of writes to a stream, defer calls to `writable.uncork()` using `process.nextTick()`. Doing so allows batching of all `writable.write()` calls that occur within a given Node.js event loop phase.

```javascript
stream.cork();
stream.write('some ');
stream.write('data ');
process.nextTick(() => stream.uncork());
```

If the `writable.cork()` method is called multiple times on a stream, the same number of calls to `writable.uncork()` must be called to flush the buffered data.

```javascript
stream.cork();
stream.write('some ');
stream.cork();
stream.write('data ');
process.nextTick(() => {
  stream.uncork();
  // The data will not be flushed until uncork() is called a second time.
  stream.uncork();
});
```

See also: `writable.cork()`.

Updates the `Verify` content with the given `data`, the encoding of which is given in `inputEncoding`. If `inputEncoding` is not provided, and the `data` is a string, an encoding of `'utf8'` is enforced. If `data` is a `Buffer`, `TypedArray`, or `DataView`, then `inputEncoding` is ignored.

This can be called many times with new data as it is streamed.
data: string,

Updates the `Verify` content with the given `data`, the encoding of which is given in `inputEncoding`. If `inputEncoding` is not provided, and the `data` is a string, an encoding of `'utf8'` is enforced. If `data` is a `Buffer`, `TypedArray`, or `DataView`, then `inputEncoding` is ignored.

This can be called many times with new data as it is streamed.

@param inputEncoding The `encoding` of the `data` string.

- signature: ArrayBufferView): boolean;
Verifies the provided data using the given
`object` and `signature`.

If `object` is not a `KeyObject`, this function behaves as if `object` had been passed to createPublicKey. If it is an object, the following additional properties can be passed:

The `signature` argument is the previously calculated signature for the data, in the `signatureEncoding`. If a `signatureEncoding` is specified, the `signature` is expected to be a string; otherwise `signature` is expected to be a `Buffer`, `TypedArray`, or `DataView`.

The `verify` object can not be used again after `verify.verify()` has been called. Multiple calls to `verify.verify()` will result in an error being thrown.

Because public keys can be derived from private keys, a private key may be passed instead of a public key.

signature: string,): boolean;

Verifies the provided data using the given `object` and `signature`.

If `object` is not a `KeyObject`, this function behaves as if `object` had been passed to createPublicKey. If it is an object, the following additional properties can be passed:

The `signature` argument is the previously calculated signature for the data, in the `signatureEncoding`. If a `signatureEncoding` is specified, the `signature` is expected to be a string; otherwise `signature` is expected to be a `Buffer`, `TypedArray`, or `DataView`.

The `verify` object can not be used again after `verify.verify()` has been called. Multiple calls to `verify.verify()` will result in an error being thrown.

Because public keys can be derived from private keys, a private key may be passed instead of a public key.
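The update/verify flow described above can be sketched end to end (a minimal example; the RSA key size and message are arbitrary):

```javascript
import { generateKeyPairSync, createSign, createVerify } from 'node:crypto';

// Generate a throwaway RSA key pair (returned as KeyObjects).
const { publicKey, privateKey } = generateKeyPairSync('rsa', {
  modulusLength: 2048,
});

// Sign the data with the private key.
const sign = createSign('sha256');
sign.update('some data to sign');
const signature = sign.sign(privateKey, 'hex');

// Verify the signature with the public key.
const verify = createVerify('sha256');
verify.update('some data to sign');
const isValid = verify.verify(publicKey, signature, 'hex');
console.log(isValid); // true
```

Note that calling `verify.verify()` a second time on the same object would throw.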
- chunk: any,): boolean;
The
`writable.write()` method writes some data to the stream, and calls the supplied `callback` once the data has been fully handled. If an error occurs, the `callback` will be called with the error as its first argument. The `callback` is called asynchronously and before `'error'` is emitted.

The return value is `true` if the internal buffer is less than the `highWaterMark` configured when the stream was created after admitting `chunk`. If `false` is returned, further attempts to write data to the stream should stop until the `'drain'` event is emitted.

While a stream is not draining, calls to `write()` will buffer `chunk`, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the `'drain'` event will be emitted. Once `write()` returns false, do not write more chunks until the `'drain'` event is emitted. While calling `write()` on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

Writing data while the stream is not draining is particularly problematic for a `Transform`, because the `Transform` streams are paused by default until they are piped or a `'data'` or `'readable'` event handler is added.

If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a `Readable` and use pipe. However, if calling `write()` is preferred, it is possible to respect backpressure and avoid memory issues using the `'drain'` event:

```javascript
function write(data, cb) {
  if (!stream.write(data)) {
    stream.once('drain', cb);
  } else {
    process.nextTick(cb);
  }
}

// Wait for cb to be called before doing any other write.
write('hello', () => {
  console.log('Write completed, do more writes now.');
});
```

A `Writable` stream in object mode will always ignore the `encoding` argument.

@param chunk Optional data to write. For streams not operating in object mode, `chunk` must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, `chunk` may be any JavaScript value other than `null`.

@param callback Callback for when this chunk of data is flushed.

@returns `false` if the stream wishes for the calling code to wait for the `'drain'` event to be emitted before continuing to write additional data; otherwise `true`.

chunk: any, encoding: BufferEncoding,): boolean;

The
`writable.write()` method writes some data to the stream, and calls the supplied `callback` once the data has been fully handled. If an error occurs, the `callback` will be called with the error as its first argument. The `callback` is called asynchronously and before `'error'` is emitted.

The return value is `true` if the internal buffer is less than the `highWaterMark` configured when the stream was created after admitting `chunk`. If `false` is returned, further attempts to write data to the stream should stop until the `'drain'` event is emitted.

While a stream is not draining, calls to `write()` will buffer `chunk`, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the `'drain'` event will be emitted. Once `write()` returns false, do not write more chunks until the `'drain'` event is emitted. While calling `write()` on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

Writing data while the stream is not draining is particularly problematic for a `Transform`, because the `Transform` streams are paused by default until they are piped or a `'data'` or `'readable'` event handler is added.

If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a `Readable` and use pipe. However, if calling `write()` is preferred, it is possible to respect backpressure and avoid memory issues using the `'drain'` event:

```javascript
function write(data, cb) {
  if (!stream.write(data)) {
    stream.once('drain', cb);
  } else {
    process.nextTick(cb);
  }
}

// Wait for cb to be called before doing any other write.
write('hello', () => {
  console.log('Write completed, do more writes now.');
});
```

A `Writable` stream in object mode will always ignore the `encoding` argument.

@param chunk Optional data to write. For streams not operating in object mode, `chunk` must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, `chunk` may be any JavaScript value other than `null`.

@param encoding The encoding, if `chunk` is a string.

@param callback Callback for when this chunk of data is flushed.

@returns `false` if the stream wishes for the calling code to wait for the `'drain'` event to be emitted before continuing to write additional data; otherwise `true`.

- ): Disposable;
Listens once to the
`abort` event on the provided `signal`.

Listening to the `abort` event on abort signals is unsafe and may lead to resource leaks since another third party with the signal can call `e.stopImmediatePropagation()`. Unfortunately Node.js cannot change this since it would violate the web standard. Additionally, the original API makes it easy to forget to remove listeners.

This API allows safely using `AbortSignal`s in Node.js APIs by solving these two issues by listening to the event such that `stopImmediatePropagation` does not prevent the listener from running.

Returns a disposable so that it may be unsubscribed from more easily.

```javascript
import { addAbortListener } from 'node:events';

function example(signal) {
  let disposable;
  try {
    signal.addEventListener('abort', (e) => e.stopImmediatePropagation());
    disposable = addAbortListener(signal, (e) => {
      // Do something when signal is aborted.
    });
  } finally {
    disposable?.[Symbol.dispose]();
  }
}
```

@returns Disposable that removes the `abort` listener.

- options?: Pick<WritableOptions<Writable>, 'signal' | 'decodeStrings' | 'highWaterMark' | 'objectMode'>
A utility method for creating a
`Writable` from a web `WritableStream`.

- name: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
`eventName`.

For `EventEmitter`s this behaves exactly the same as calling `.listeners` on the emitter.

For `EventTarget`s this is the only way to get the event listeners for the event target. This is useful for debugging and diagnostic purposes.

```javascript
import { getEventListeners, EventEmitter } from 'node:events';

{
  const ee = new EventEmitter();
  const listener = () => console.log('Events are fun');
  ee.on('foo', listener);
  console.log(getEventListeners(ee, 'foo')); // [ [Function: listener] ]
}
{
  const et = new EventTarget();
  const listener = () => console.log('Events are fun');
  et.addEventListener('foo', listener);
  console.log(getEventListeners(et, 'foo')); // [ [Function: listener] ]
}
```

- ): number;
Returns the currently set max amount of listeners.
For
`EventEmitter`s this behaves exactly the same as calling `.getMaxListeners` on the emitter.

For `EventTarget`s this is the only way to get the max event listeners for the event target. If the number of event handlers on a single EventTarget exceeds the max set, the EventTarget will print a warning.

```javascript
import { getMaxListeners, setMaxListeners, EventEmitter } from 'node:events';

{
  const ee = new EventEmitter();
  console.log(getMaxListeners(ee)); // 10
  setMaxListeners(11, ee);
  console.log(getMaxListeners(ee)); // 11
}
{
  const et = new EventTarget();
  console.log(getMaxListeners(et)); // 10
  setMaxListeners(11, et);
  console.log(getMaxListeners(et)); // 11
}
```

- emitter: EventEmitter, eventName: string | symbol, options?: StaticEventEmitterIteratorOptions): AsyncIterator<any[]>;
```javascript
import { on, EventEmitter } from 'node:events';
import process from 'node:process';

const ee = new EventEmitter();

// Emit later on
process.nextTick(() => {
  ee.emit('foo', 'bar');
  ee.emit('foo', 42);
});

for await (const event of on(ee, 'foo')) {
  // The execution of this inner block is synchronous and it
  // processes one event at a time (even with await). Do not use
  // if concurrent execution is required.
  console.log(event); // prints ['bar'] [42]
}
// Unreachable here
```

Returns an `AsyncIterator` that iterates `eventName` events. It will throw if the `EventEmitter` emits `'error'`. It removes all listeners when exiting the loop. The `value` returned by each iteration is an array composed of the emitted event arguments.

An `AbortSignal` can be used to cancel waiting on events:

```javascript
import { on, EventEmitter } from 'node:events';
import process from 'node:process';

const ac = new AbortController();

(async () => {
  const ee = new EventEmitter();

  // Emit later on
  process.nextTick(() => {
    ee.emit('foo', 'bar');
    ee.emit('foo', 42);
  });

  for await (const event of on(ee, 'foo', { signal: ac.signal })) {
    // The execution of this inner block is synchronous and it
    // processes one event at a time (even with await). Do not use
    // if concurrent execution is required.
    console.log(event); // prints ['bar'] [42]
  }
  // Unreachable here
})();

process.nextTick(() => ac.abort());
```

Use the `close` option to specify an array of event names that will end the iteration:

```javascript
import { on, EventEmitter } from 'node:events';
import process from 'node:process';

const ee = new EventEmitter();

// Emit later on
process.nextTick(() => {
  ee.emit('foo', 'bar');
  ee.emit('foo', 42);
  ee.emit('close');
});

for await (const event of on(ee, 'foo', { close: ['close'] })) {
  console.log(event); // prints ['bar'] [42]
}
// the loop will exit after 'close' is emitted
console.log('done'); // prints 'done'
```

@returns An `AsyncIterator` that iterates `eventName` events emitted by the `emitter`

eventName: string, options?: StaticEventEmitterIteratorOptions): AsyncIterator<any[]>;

```javascript
import { on, EventEmitter } from 'node:events';
import process from 'node:process';

const ee = new EventEmitter();

// Emit later on
process.nextTick(() => {
  ee.emit('foo', 'bar');
  ee.emit('foo', 42);
});

for await (const event of on(ee, 'foo')) {
  // The execution of this inner block is synchronous and it
  // processes one event at a time (even with await). Do not use
  // if concurrent execution is required.
  console.log(event); // prints ['bar'] [42]
}
// Unreachable here
```

Returns an `AsyncIterator` that iterates `eventName` events. It will throw if the `EventEmitter` emits `'error'`. It removes all listeners when exiting the loop. The `value` returned by each iteration is an array composed of the emitted event arguments.

An `AbortSignal` can be used to cancel waiting on events:

```javascript
import { on, EventEmitter } from 'node:events';
import process from 'node:process';

const ac = new AbortController();

(async () => {
  const ee = new EventEmitter();

  // Emit later on
  process.nextTick(() => {
    ee.emit('foo', 'bar');
    ee.emit('foo', 42);
  });

  for await (const event of on(ee, 'foo', { signal: ac.signal })) {
    // The execution of this inner block is synchronous and it
    // processes one event at a time (even with await). Do not use
    // if concurrent execution is required.
    console.log(event); // prints ['bar'] [42]
  }
  // Unreachable here
})();

process.nextTick(() => ac.abort());
```

Use the `close` option to specify an array of event names that will end the iteration:

```javascript
import { on, EventEmitter } from 'node:events';
import process from 'node:process';

const ee = new EventEmitter();

// Emit later on
process.nextTick(() => {
  ee.emit('foo', 'bar');
  ee.emit('foo', 42);
  ee.emit('close');
});

for await (const event of on(ee, 'foo', { close: ['close'] })) {
  console.log(event); // prints ['bar'] [42]
}
// the loop will exit after 'close' is emitted
console.log('done'); // prints 'done'
```

@returns An `AsyncIterator` that iterates `eventName` events emitted by the `emitter`

- emitter: EventEmitter, eventName: string | symbol, options?: StaticEventEmitterOptions): Promise<any[]>;
Creates a
`Promise` that is fulfilled when the `EventEmitter` emits the given event or that is rejected if the `EventEmitter` emits `'error'` while waiting. The `Promise` will resolve with an array of all the arguments emitted to the given event.

This method is intentionally generic and works with the web platform EventTarget interface, which has no special `'error'` event semantics and does not listen to the `'error'` event.

```javascript
import { once, EventEmitter } from 'node:events';
import process from 'node:process';

const ee = new EventEmitter();

process.nextTick(() => {
  ee.emit('myevent', 42);
});

const [value] = await once(ee, 'myevent');
console.log(value);

const err = new Error('kaboom');
process.nextTick(() => {
  ee.emit('error', err);
});

try {
  await once(ee, 'myevent');
} catch (err) {
  console.error('error happened', err);
}
```

The special handling of the `'error'` event is only used when `events.once()` is used to wait for another event. If `events.once()` is used to wait for the `'error'` event itself, then it is treated as any other kind of event without special handling:

```javascript
import { EventEmitter, once } from 'node:events';

const ee = new EventEmitter();

once(ee, 'error')
  .then(([err]) => console.log('ok', err.message))
  .catch((err) => console.error('error', err.message));

ee.emit('error', new Error('boom'));

// Prints: ok boom
```

An `AbortSignal` can be used to cancel waiting for the event:

```javascript
import { EventEmitter, once } from 'node:events';

const ee = new EventEmitter();
const ac = new AbortController();

async function foo(emitter, event, signal) {
  try {
    await once(emitter, event, { signal });
    console.log('event emitted!');
  } catch (error) {
    if (error.name === 'AbortError') {
      console.error('Waiting for the event was canceled!');
    } else {
      console.error('There was an error', error.message);
    }
  }
}

foo(ee, 'foo', ac.signal);
ac.abort(); // Abort waiting for the event
ee.emit('foo'); // Prints: Waiting for the event was canceled!
```

eventName: string, options?: StaticEventEmitterOptions): Promise<any[]>;

Creates a `Promise` that is fulfilled when the `EventEmitter` emits the given event or that is rejected if the `EventEmitter` emits `'error'` while waiting. The `Promise` will resolve with an array of all the arguments emitted to the given event.

This method is intentionally generic and works with the web platform EventTarget interface, which has no special `'error'` event semantics and does not listen to the `'error'` event.

```javascript
import { once, EventEmitter } from 'node:events';
import process from 'node:process';

const ee = new EventEmitter();

process.nextTick(() => {
  ee.emit('myevent', 42);
});

const [value] = await once(ee, 'myevent');
console.log(value);

const err = new Error('kaboom');
process.nextTick(() => {
  ee.emit('error', err);
});

try {
  await once(ee, 'myevent');
} catch (err) {
  console.error('error happened', err);
}
```

The special handling of the `'error'` event is only used when `events.once()` is used to wait for another event. If `events.once()` is used to wait for the `'error'` event itself, then it is treated as any other kind of event without special handling:

```javascript
import { EventEmitter, once } from 'node:events';

const ee = new EventEmitter();

once(ee, 'error')
  .then(([err]) => console.log('ok', err.message))
  .catch((err) => console.error('error', err.message));

ee.emit('error', new Error('boom'));

// Prints: ok boom
```

An `AbortSignal` can be used to cancel waiting for the event:

```javascript
import { EventEmitter, once } from 'node:events';

const ee = new EventEmitter();
const ac = new AbortController();

async function foo(emitter, event, signal) {
  try {
    await once(emitter, event, { signal });
    console.log('event emitted!');
  } catch (error) {
    if (error.name === 'AbortError') {
      console.error('Waiting for the event was canceled!');
    } else {
      console.error('There was an error', error.message);
    }
  }
}

foo(ee, 'foo', ac.signal);
ac.abort(); // Abort waiting for the event
ee.emit('foo'); // Prints: Waiting for the event was canceled!
```

- n?: number,): void;
```javascript
import { setMaxListeners, EventEmitter } from 'node:events';

const target = new EventTarget();
const emitter = new EventEmitter();
setMaxListeners(5, target, emitter);
```

@param n A non-negative number. The maximum number of listeners per `EventTarget` event.

@param eventTargets Zero or more {EventTarget} or {EventEmitter} instances. If none are specified, `n` is set as the default max for all newly created {EventTarget} and {EventEmitter} objects.

A utility method for creating a web `WritableStream` from a `Writable`.
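A possible sketch, assuming the static `Writable.toWeb()` helper available in newer Node.js releases (the collecting `Writable` here is hypothetical):

```javascript
import { Writable } from 'node:stream';

// A Node.js Writable that records what it receives.
const chunks = [];
const nodeWritable = new Writable({
  write(chunk, encoding, callback) {
    chunks.push(chunk.toString());
    callback();
  },
});

// Bridge it to a WHATWG WritableStream and write through the web writer.
const webWritable = Writable.toWeb(nodeWritable);
const writer = webWritable.getWriter();

await writer.write('hello ');
await writer.write('world');
await writer.close();

console.log(chunks.join('')); // prints "hello world"
```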
- As a writable
class X509Certificate
Encapsulates an X509 certificate and provides read-only access to its information.
```javascript
const { X509Certificate } = await import('node:crypto');

const x509 = new X509Certificate('{... pem encoded cert ...}');

console.log(x509.subject);
```

- readonly fingerprint: string
The SHA-1 fingerprint of this certificate.
Because SHA-1 is cryptographically broken and because the security of SHA-1 is significantly worse than that of algorithms that are commonly used to sign certificates, consider using
`x509.fingerprint256` instead.

- readonly fingerprint512: string
The SHA-512 fingerprint of this certificate.
Because computing the SHA-256 fingerprint is usually faster and because it is only half the size of the SHA-512 fingerprint,
`x509.fingerprint256` may be a better choice. While SHA-512 presumably provides a higher level of security in general, the security of SHA-256 matches that of most algorithms that are commonly used to sign certificates.

- readonly infoAccess: undefined | string
A textual representation of the certificate's authority information access extension.
This is a line feed separated list of access descriptions. Each line begins with the access method and the kind of the access location, followed by a colon and the value associated with the access location.
After the prefix denoting the access method and the kind of the access location, the remainder of each line might be enclosed in quotes to indicate that the value is a JSON string literal. For backward compatibility, Node.js only uses JSON string literals within this property when necessary to avoid ambiguity. Third-party code should be prepared to handle both possible entry formats.
- readonly issuerCertificate: undefined | X509Certificate
The issuer certificate or
`undefined` if the issuer certificate is not available.

- readonly serialNumber: string
The serial number of this certificate.
Serial numbers are assigned by certificate authorities and do not uniquely identify certificates. Consider using
`x509.fingerprint256` as a unique identifier instead.

- readonly signatureAlgorithm: undefined | string
The algorithm used to sign the certificate or
`undefined` if the signature algorithm is unknown by OpenSSL.

- readonly subjectAltName: undefined | string
The subject alternative name specified for this certificate.
This is a comma-separated list of subject alternative names. Each entry begins with a string identifying the kind of the subject alternative name followed by a colon and the value associated with the entry.
Earlier versions of Node.js incorrectly assumed that it is safe to split this property at the two-character sequence
`', '` (see CVE-2021-44532). However, both malicious and legitimate certificates can contain subject alternative names that include this sequence when represented as a string.

After the prefix denoting the type of the entry, the remainder of each entry might be enclosed in quotes to indicate that the value is a JSON string literal. For backward compatibility, Node.js only uses JSON string literals within this property when necessary to avoid ambiguity. Third-party code should be prepared to handle both possible entry formats.
- readonly validFromDate: Date
The date/time from which this certificate is valid, encapsulated in a
`Date` object.

- readonly validToDate: Date
The date/time until which this certificate is valid, encapsulated in a
`Date` object.

- email: string,): undefined | string;
Checks whether the certificate matches the given email address.
If the
`'subject'` option is undefined or set to `'default'`, the certificate subject is only considered if the subject alternative name extension either does not exist or does not contain any email addresses.

If the `'subject'` option is set to `'always'` and if the subject alternative name extension either does not exist or does not contain a matching email address, the certificate subject is considered.

If the `'subject'` option is set to `'never'`, the certificate subject is never considered, even if the certificate contains no subject alternative names.

@returns Returns `email` if the certificate matches, `undefined` if it does not.

- name: string,): undefined | string;
Checks whether the certificate matches the given host name.
If the certificate matches the given host name, the matching subject name is returned. The returned name might be an exact match (e.g.,
`foo.example.com`) or it might contain wildcards (e.g., `*.example.com`). Because host name comparisons are case-insensitive, the returned subject name might also differ from the given `name` in capitalization.

If the `'subject'` option is undefined or set to `'default'`, the certificate subject is only considered if the subject alternative name extension either does not exist or does not contain any DNS names. This behavior is consistent with RFC 2818 ("HTTP Over TLS").

If the `'subject'` option is set to `'always'` and if the subject alternative name extension either does not exist or does not contain a matching DNS name, the certificate subject is considered.

If the `'subject'` option is set to `'never'`, the certificate subject is never considered, even if the certificate contains no subject alternative names.

@returns Returns a subject name that matches `name`, or `undefined` if no subject name matches `name`.

- ip: string): undefined | string;
Checks whether the certificate matches the given IP address (IPv4 or IPv6).
Only RFC 5280
`iPAddress` subject alternative names are considered, and they must match the given `ip` address exactly. Other subject alternative names as well as the subject field of the certificate are ignored.

@returns Returns `ip` if the certificate matches, `undefined` if it does not.

- ): boolean;
Checks whether this certificate was potentially issued by the given
`otherCert` by comparing the certificate metadata.

This is useful for pruning a list of possible issuer certificates which have been selected using a more rudimentary filtering routine, i.e. just based on subject and issuer names.
Finally, to verify that this certificate's signature was produced by a private key corresponding to
`otherCert`'s public key, use `x509.verify(publicKey)` with `otherCert`'s public key represented as a `KeyObject`, like so:

```javascript
if (!x509.verify(otherCert.publicKey)) {
  throw new Error('otherCert did not issue x509');
}
```

- ): boolean;
Checks whether the public key for this certificate is consistent with the given private key.
@param privateKey A private key.
There is no standard JSON encoding for X509 certificates. The
`toJSON()` method returns a string containing the PEM encoded certificate.

Returns information about this certificate using the legacy certificate object encoding.

Returns the PEM-encoded certificate.
The
`DiffieHellmanGroup` class takes a well-known modp group as its argument. It works the same as `DiffieHellman`, except that it does not allow changing its keys after creation. In other words, it does not implement `setPublicKey()` or `setPrivateKey()` methods.

```javascript
const { createDiffieHellmanGroup } = await import('node:crypto');
const dh = createDiffieHellmanGroup('modp1');
```

The name (e.g. `'modp1'`) is taken from RFC 2412 (modp1 and 2) and RFC 3526:

```shell
perl -ne 'print "$1\n" if /"(modp\d+)"/' src/node_crypto_groups.h
modp1  #  768 bits
modp2  # 1024 bits
modp5  # 1536 bits
modp14 # 2048 bits
modp15 # etc.
modp16
modp17
modp18
```

A convenient alias for
`crypto.webcrypto.subtle`.

An implementation of the Web Crypto API standard.
See the Web Crypto API documentation for details.
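For example, a SHA-256 digest can be computed through `crypto.webcrypto.subtle` (a minimal sketch; the input string is arbitrary):

```javascript
import { webcrypto } from 'node:crypto';

// Hash the UTF-8 bytes of 'abc' with the Web Crypto API.
const data = new TextEncoder().encode('abc');
const digest = await webcrypto.subtle.digest('SHA-256', data);

console.log(Buffer.from(digest).toString('hex'));
// ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
```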
- ): void;
Provides an asynchronous Argon2 implementation. Argon2 is a password-based key derivation function that is designed to be expensive computationally and memory-wise in order to make brute-force attacks unrewarding.
The
`nonce` should be as unique as possible. It is recommended that a nonce is random and at least 16 bytes long. See NIST SP 800-132 for details.

When passing strings for `message`, `nonce`, `secret` or `associatedData`, please consider caveats when using strings as inputs to cryptographic APIs.

The `callback` function is called with two arguments: `err` and `derivedKey`. `err` is an exception object when key derivation fails, otherwise `err` is `null`. `derivedKey` is passed to the callback as a `Buffer`.

An exception is thrown when any of the input arguments specify invalid values or types.

```javascript
const { argon2, randomBytes } = await import('node:crypto');

const parameters = {
  message: 'password',
  nonce: randomBytes(16),
  parallelism: 4,
  tagLength: 64,
  memory: 65536,
  passes: 3,
};

argon2('argon2id', parameters, (err, derivedKey) => {
  if (err) throw err;
  console.log(derivedKey.toString('hex')); // 'af91dad...9520f15'
});
```

@param algorithm Variant of Argon2, one of `"argon2d"`, `"argon2i"` or `"argon2id"`.

- ): NonSharedBuffer;
Provides a synchronous Argon2 implementation. Argon2 is a password-based key derivation function that is designed to be expensive computationally and memory-wise in order to make brute-force attacks unrewarding.
The
`nonce` should be as unique as possible. It is recommended that a nonce is random and at least 16 bytes long. See NIST SP 800-132 for details.

When passing strings for `message`, `nonce`, `secret` or `associatedData`, please consider caveats when using strings as inputs to cryptographic APIs.

An exception is thrown when key derivation fails, otherwise the derived key is returned as a `Buffer`.

An exception is thrown when any of the input arguments specify invalid values or types.

```javascript
const { argon2Sync, randomBytes } = await import('node:crypto');

const parameters = {
  message: 'password',
  nonce: randomBytes(16),
  parallelism: 4,
  tagLength: 64,
  memory: 65536,
  passes: 3,
};

const derivedKey = argon2Sync('argon2id', parameters);
console.log(derivedKey.toString('hex')); // 'af91dad...9520f15'
```

- ): void;
Checks the primality of the candidate.
): void;
Checks the primality of the candidate.
- ): boolean;
Checks the primality of the candidate.

@param candidate A possible prime encoded as a sequence of big endian octets of arbitrary length.
@returns true if the candidate is a prime with an error probability less than 0.25 ** options.checks.

Creates and returns a
Cipher object, with the given algorithm, key, and initialization vector (iv).

The options argument controls stream behavior and is optional except when a cipher in CCM or OCB mode (e.g. 'aes-128-ccm') is used. In that case, the authTagLength option is required and specifies the length of the authentication tag in bytes; see CCM mode. In GCM mode, the authTagLength option is not required but can be used to set the length of the authentication tag that will be returned by getAuthTag(), and defaults to 16 bytes. For chacha20-poly1305, the authTagLength option defaults to 16 bytes.

The algorithm is dependent on OpenSSL; examples are 'aes192', etc. On recent OpenSSL releases, openssl list -cipher-algorithms will display the available cipher algorithms.

The key is the raw key used by the algorithm and iv is an initialization vector. Both arguments must be 'utf8'-encoded strings, Buffers, TypedArrays, or DataViews. The key may optionally be a KeyObject of type secret. If the cipher does not need an initialization vector, iv may be null.

When passing strings for key or iv, please consider caveats when using strings as inputs to cryptographic APIs.

Initialization vectors should be unpredictable and unique; ideally, they will be cryptographically random. They do not have to be secret: IVs are typically just added to ciphertext messages unencrypted. It may sound contradictory that something has to be unpredictable and unique, but does not have to be secret; remember that an attacker must not be able to predict ahead of time what a given IV will be.

@param options stream.transform options

Creates and returns a
Cipher object, as described above. The remaining createCipheriv overloads differ only in their parameter types:
algorithm: 'chacha20-poly1305',
algorithm: string,
@param options stream.transform options

Creates and returns a
Decipheriv object that uses the given algorithm, key, and initialization vector (iv).

The options argument controls stream behavior and is optional except when a cipher in CCM or OCB mode (e.g. 'aes-128-ccm') is used. In that case, the authTagLength option is required and specifies the length of the authentication tag in bytes; see CCM mode. In GCM mode, the authTagLength option is not required but can be used to restrict accepted authentication tags to those with the specified length. For chacha20-poly1305, the authTagLength option defaults to 16 bytes.

The algorithm is dependent on OpenSSL; examples are 'aes192', etc. On recent OpenSSL releases, openssl list -cipher-algorithms will display the available cipher algorithms.

The key is the raw key used by the algorithm and iv is an initialization vector. Both arguments must be 'utf8'-encoded strings, Buffers, TypedArrays, or DataViews. The key may optionally be a KeyObject of type secret. If the cipher does not need an initialization vector, iv may be null.

When passing strings for key or iv, please consider caveats when using strings as inputs to cryptographic APIs.

Initialization vectors should be unpredictable and unique; ideally, they will be cryptographically random. They do not have to be secret: IVs are typically just added to ciphertext messages unencrypted. It may sound contradictory that something has to be unpredictable and unique, but does not have to be secret; remember that an attacker must not be able to predict ahead of time what a given IV will be.

@param options stream.transform options

Creates and returns a
Decipheriv object, as described above. The remaining createDecipheriv overloads differ only in their parameter types:
algorithm: 'chacha20-poly1305',
algorithm: string,
@param options stream.transform options

- primeLength: number, generator?: number
Creates a DiffieHellman key exchange object using the supplied prime and an optional specific generator.

The generator argument can be a number, string, or Buffer. If generator is not specified, the value 2 is used.

If primeEncoding is specified, prime is expected to be a string; otherwise a Buffer, TypedArray, or DataView is expected.

If generatorEncoding is specified, generator is expected to be a string; otherwise a number, Buffer, TypedArray, or DataView is expected.

Creates a
DiffieHellman key exchange object, as described above. The remaining createDiffieHellman overloads differ only in their parameter types:
generator: string,
prime: string,
prime: string, generator: string,
@param primeEncoding The encoding of the prime string.
@param generatorEncoding The encoding of the generator string.

- curveName: string
Creates an Elliptic Curve Diffie-Hellman (ECDH) key exchange object using a predefined curve specified by the curveName string. Use getCurves to obtain a list of available curve names. On recent OpenSSL releases, openssl ecparam -list_curves will also display the name and description of each available elliptic curve.

- algorithm: string,
Creates and returns a Hash object that can be used to generate hash digests using the given algorithm. The optional options argument controls stream behavior. For XOF hash functions such as 'shake256', the outputLength option can be used to specify the desired output length in bytes.

The algorithm is dependent on the available algorithms supported by the version of OpenSSL on the platform. Examples are 'sha256', 'sha512', etc. On recent releases of OpenSSL, openssl list -digest-algorithms will display the available digest algorithms.

Example: generating the sha256 sum of a file

import { createReadStream } from 'node:fs';
import { argv } from 'node:process';
const { createHash } = await import('node:crypto');

const filename = argv[2];
const hash = createHash('sha256');
const input = createReadStream(filename);
input.on('readable', () => {
  // Only one element is going to be produced by the
  // hash stream.
  const data = input.read();
  if (data)
    hash.update(data);
  else {
    console.log(`${hash.digest('hex')} ${filename}`);
  }
});

@param options stream.transform options
Creates and returns an Hmac object that uses the given algorithm and key. The optional options argument controls stream behavior.

The algorithm is dependent on the available algorithms supported by the version of OpenSSL on the platform. Examples are 'sha256', 'sha512', etc. On recent releases of OpenSSL, openssl list -digest-algorithms will display the available digest algorithms.

The key is the HMAC key used to generate the cryptographic HMAC hash. If it is a KeyObject, its type must be secret. If it is a string, please consider caveats when using strings as inputs to cryptographic APIs. If it was obtained from a cryptographically secure source of entropy, such as randomBytes or generateKey, its length should not exceed the block size of algorithm (e.g., 512 bits for SHA-256).

Example: generating the sha256 HMAC of a file

import { createReadStream } from 'node:fs';
import { argv } from 'node:process';
const { createHmac } = await import('node:crypto');

const filename = argv[2];
const hmac = createHmac('sha256', 'a secret');
const input = createReadStream(filename);
input.on('readable', () => {
  // Only one element is going to be produced by the
  // hash stream.
  const data = input.read();
  if (data)
    hmac.update(data);
  else {
    console.log(`${hmac.digest('hex')} ${filename}`);
  }
});

@param options stream.transform options

Creates and returns a new key object containing a private key. If
key is a string or Buffer, format is assumed to be 'pem'; otherwise, key must be an object with the properties described above.

If the private key is encrypted, a passphrase must be specified. The length of the passphrase is limited to 1024 bytes.

Creates and returns a new key object containing a public key. If key is a string or Buffer, format is assumed to be 'pem'; if key is a KeyObject with type 'private', the public key is derived from the given private key; otherwise, key must be an object with the properties described above.

If the format is 'pem', the 'key' may also be an X.509 certificate.

Because public keys can be derived from private keys, a private key may be passed instead of a public key. In that case, this function behaves as if createPrivateKey had been called, except that the type of the returned KeyObject will be 'public' and the private key cannot be extracted from the returned KeyObject. Similarly, if a KeyObject with type 'private' is given, a new KeyObject with type 'public' will be returned and it will be impossible to extract the private key from the returned object.

- key: ArrayBufferView
Creates and returns a new key object containing a secret key for symmetric encryption or Hmac.

key: string, encoding: BufferEncoding

Creates and returns a new key object containing a secret key for symmetric encryption or Hmac.

@param encoding The string encoding when key is a string.

- algorithm: string,
Creates and returns a Sign object that uses the given algorithm. Use getHashes to obtain the names of the available digest algorithms. The optional options argument controls the stream.Writable behavior.

In some cases, a Sign instance can be created using the name of a signature algorithm, such as 'RSA-SHA256', instead of a digest algorithm. This will use the corresponding digest algorithm. This does not work for all signature algorithms, such as 'ecdsa-with-SHA256', so it is best to always use digest algorithm names.

@param options stream.Writable options

- algorithm: string,
Creates and returns a Verify object that uses the given algorithm. Use getHashes to obtain an array of names of the available signing algorithms. The optional options argument controls the stream.Writable behavior.

In some cases, a Verify instance can be created using the name of a signature algorithm, such as 'RSA-SHA256', instead of a digest algorithm. This will use the corresponding digest algorithm. This does not work for all signature algorithms, such as 'ecdsa-with-SHA256', so it is best to always use digest algorithm names.

@param options stream.Writable options

- ): NonSharedBuffer;
Key decapsulation using a KEM algorithm with a private key.
Supported key types and their KEM algorithms are:
'rsa': RSA Secret Value Encapsulation
'ec': DHKEM(P-256, HKDF-SHA256), DHKEM(P-384, HKDF-SHA256), DHKEM(P-521, HKDF-SHA256)
'x25519': DHKEM(X25519, HKDF-SHA256)
'x448': DHKEM(X448, HKDF-SHA512)
'ml-kem-512': ML-KEM
'ml-kem-768': ML-KEM
'ml-kem-1024': ML-KEM

If key is not a KeyObject, this function behaves as if key had been passed to crypto.createPrivateKey(). If the callback function is provided, this function uses libuv's threadpool.

): void;
Key decapsulation using a KEM algorithm with a private key, as described above (callback overload).

- ): NonSharedBuffer;
Computes the Diffie-Hellman shared secret based on a privateKey and a publicKey. Both keys must have the same asymmetricKeyType and must support either the DH or ECDH operation.

If the callback function is provided, this function uses libuv's threadpool.

): void;
Computes the Diffie-Hellman shared secret, as described above (callback overload).

- ): { ciphertext: NonSharedBuffer; sharedKey: NonSharedBuffer };
Key encapsulation using a KEM algorithm with a public key.

Supported key types and their KEM algorithms are:

'rsa': RSA Secret Value Encapsulation
'ec': DHKEM(P-256, HKDF-SHA256), DHKEM(P-384, HKDF-SHA256), DHKEM(P-521, HKDF-SHA256)
'x25519': DHKEM(X25519, HKDF-SHA256)
'x448': DHKEM(X448, HKDF-SHA512)
'ml-kem-512': ML-KEM
'ml-kem-768': ML-KEM
'ml-kem-1024': ML-KEM

If key is not a KeyObject, this function behaves as if key had been passed to crypto.createPublicKey(). If the callback function is provided, this function uses libuv's threadpool.

): void;
Key encapsulation using a KEM algorithm with a public key, as described above (callback overload).

- type: 'hmac' | 'aes', options: { length: number }, ): void;
Asynchronously generates a new random secret key of the given length. The type will determine which validations will be performed on the length.

const { generateKey } = await import('node:crypto');

generateKey('hmac', { length: 512 }, (err, key) => {
  if (err) throw err;
  console.log(key.export().toString('hex'));  // 46e..........620
});

The size of a generated HMAC key should not exceed the block size of the underlying hash function. See createHmac for more information.

@param type The intended use of the generated secret key. Currently accepted values are 'hmac' and 'aes'.

- type: 'rsa',): void;
Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

const { generateKeyPair } = await import('node:crypto');

generateKeyPair('rsa', {
  modulusLength: 4096,
  publicKeyEncoding: {
    type: 'spki',
    format: 'pem',
  },
  privateKeyEncoding: {
    type: 'pkcs8',
    format: 'pem',
    cipher: 'aes-256-cbc',
    passphrase: 'top secret',
  },
}, (err, publicKey, privateKey) => {
  // Handle errors and use the generated key pair.
});

On completion, callback will be called with err set to undefined and publicKey/privateKey representing the generated key pair.

If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

@param type The asymmetric key type to generate. See the supported asymmetric key types.
type: 'rsa',): void;Generates a new asymmetric key pair of the given
type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncodingorprivateKeyEncodingwas specified, this function behaves as ifkeyObject.export()had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject.It is recommended to encode public keys as
'spki'and private keys as'pkcs8'with encryption for long-term storage:const { generateKeyPair, } = await import('node:crypto'); generateKeyPair('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, }, (err, publicKey, privateKey) => { // Handle errors and use the generated key pair. });On completion,
callbackwill be called witherrset toundefinedandpublicKey/privateKeyrepresenting the generated key pair.If this method is invoked as its
util.promisify()ed version, it returns aPromisefor anObjectwithpublicKeyandprivateKeyproperties.@param typeThe asymmetric key type to generate. See the supported asymmetric key types.
type: 'rsa',): void;Generates a new asymmetric key pair of the given
type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncodingorprivateKeyEncodingwas specified, this function behaves as ifkeyObject.export()had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject.It is recommended to encode public keys as
'spki'and private keys as'pkcs8'with encryption for long-term storage:const { generateKeyPair, } = await import('node:crypto'); generateKeyPair('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, }, (err, publicKey, privateKey) => { // Handle errors and use the generated key pair. });On completion,
callbackwill be called witherrset toundefinedandpublicKey/privateKeyrepresenting the generated key pair.If this method is invoked as its
util.promisify()ed version, it returns aPromisefor anObjectwithpublicKeyandprivateKeyproperties.@param typeThe asymmetric key type to generate. See the supported asymmetric key types.
type: 'rsa'): void;
type: 'rsa-pss'): void;
type: 'dsa'): void;
type: 'ec'): void;
type: 'ed25519'): void;
type: 'ed448'): void;
type: 'x25519',): void;Generates a new asymmetric key pair of the given
type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncodingorprivateKeyEncodingwas specified, this function behaves as ifkeyObject.export()had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject.It is recommended to encode public keys as
'spki'and private keys as'pkcs8'with encryption for long-term storage:const { generateKeyPair, } = await import('node:crypto'); generateKeyPair('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, }, (err, publicKey, privateKey) => { // Handle errors and use the generated key pair. });On completion,
callbackwill be called witherrset toundefinedandpublicKey/privateKeyrepresenting the generated key pair.If this method is invoked as its
util.promisify()ed version, it returns aPromisefor anObjectwithpublicKeyandprivateKeyproperties.@param typeThe asymmetric key type to generate. See the supported asymmetric key types.
type: 'x448',): void;
Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

```js
const { generateKeyPair } = await import('node:crypto');

generateKeyPair('rsa', {
  modulusLength: 4096,
  publicKeyEncoding: {
    type: 'spki',
    format: 'pem',
  },
  privateKeyEncoding: {
    type: 'pkcs8',
    format: 'pem',
    cipher: 'aes-256-cbc',
    passphrase: 'top secret',
  },
}, (err, publicKey, privateKey) => {
  // Handle errors and use the generated key pair.
});
```

On completion, callback will be called with err set to undefined and publicKey/privateKey representing the generated key pair.

If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

@param type The asymmetric key type to generate. See the supported asymmetric key types.
type: 'ml-dsa-44' | 'ml-dsa-65' | 'ml-dsa-87',): void;
Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

```js
const { generateKeyPair } = await import('node:crypto');

generateKeyPair('rsa', {
  modulusLength: 4096,
  publicKeyEncoding: {
    type: 'spki',
    format: 'pem',
  },
  privateKeyEncoding: {
    type: 'pkcs8',
    format: 'pem',
    cipher: 'aes-256-cbc',
    passphrase: 'top secret',
  },
}, (err, publicKey, privateKey) => {
  // Handle errors and use the generated key pair.
});
```

On completion, callback will be called with err set to undefined and publicKey/privateKey representing the generated key pair.

If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

@param type The asymmetric key type to generate. See the supported asymmetric key types.
type: 'ml-kem-1024' | 'ml-kem-512' | 'ml-kem-768',): void;
Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

```js
const { generateKeyPair } = await import('node:crypto');

generateKeyPair('rsa', {
  modulusLength: 4096,
  publicKeyEncoding: {
    type: 'spki',
    format: 'pem',
  },
  privateKeyEncoding: {
    type: 'pkcs8',
    format: 'pem',
    cipher: 'aes-256-cbc',
    passphrase: 'top secret',
  },
}, (err, publicKey, privateKey) => {
  // Handle errors and use the generated key pair.
});
```

On completion, callback will be called with err set to undefined and publicKey/privateKey representing the generated key pair.

If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

@param type The asymmetric key type to generate. See the supported asymmetric key types.
type: 'slh-dsa-sha2-128f' | 'slh-dsa-sha2-128s' | 'slh-dsa-sha2-192f' | 'slh-dsa-sha2-192s' | 'slh-dsa-sha2-256f' | 'slh-dsa-sha2-256s' | 'slh-dsa-shake-128f' | 'slh-dsa-shake-128s' | 'slh-dsa-shake-192f' | 'slh-dsa-shake-192s' | 'slh-dsa-shake-256f' | 'slh-dsa-shake-256s',): void;
Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

```js
const { generateKeyPair } = await import('node:crypto');

generateKeyPair('rsa', {
  modulusLength: 4096,
  publicKeyEncoding: {
    type: 'spki',
    format: 'pem',
  },
  privateKeyEncoding: {
    type: 'pkcs8',
    format: 'pem',
    cipher: 'aes-256-cbc',
    passphrase: 'top secret',
  },
}, (err, publicKey, privateKey) => {
  // Handle errors and use the generated key pair.
});
```

On completion, callback will be called with err set to undefined and publicKey/privateKey representing the generated key pair.

If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

@param type The asymmetric key type to generate. See the supported asymmetric key types.
type: 'rsa',
Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, DH, and ML-DSA are currently supported.

If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

```js
const { generateKeyPairSync } = await import('node:crypto');

const { publicKey, privateKey } = generateKeyPairSync('rsa', {
  modulusLength: 4096,
  publicKeyEncoding: {
    type: 'spki',
    format: 'pem',
  },
  privateKeyEncoding: {
    type: 'pkcs8',
    format: 'pem',
    cipher: 'aes-256-cbc',
    passphrase: 'top secret',
  },
});
```

The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

@param type The asymmetric key type to generate. See the supported asymmetric key types.
type: 'rsa',Generates a new asymmetric key pair of the given
type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, DH, and ML-DSA are currently supported.If a
publicKeyEncodingorprivateKeyEncodingwas specified, this function behaves as ifkeyObject.export()had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject.When encoding public keys, it is recommended to use
'spki'. When encoding private keys, it is recommended to use'pkcs8'with a strong passphrase, and to keep the passphrase confidential.const { generateKeyPairSync, } = await import('node:crypto'); const { publicKey, privateKey, } = generateKeyPairSync('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, });The return value
{ publicKey, privateKey }represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.@param typeThe asymmetric key type to generate. See the supported asymmetric key types.
type: 'rsa',Generates a new asymmetric key pair of the given
type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, DH, and ML-DSA are currently supported.If a
publicKeyEncodingorprivateKeyEncodingwas specified, this function behaves as ifkeyObject.export()had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject.When encoding public keys, it is recommended to use
'spki'. When encoding private keys, it is recommended to use'pkcs8'with a strong passphrase, and to keep the passphrase confidential.const { generateKeyPairSync, } = await import('node:crypto'); const { publicKey, privateKey, } = generateKeyPairSync('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, });The return value
{ publicKey, privateKey }represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.@param typeThe asymmetric key type to generate. See the supported asymmetric key types.
type: 'rsa-pss',

Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, DH, and ML-DSA are currently supported.

If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

```js
const { generateKeyPairSync, } = await import('node:crypto');
const {
  publicKey,
  privateKey,
} = generateKeyPairSync('rsa', {
  modulusLength: 4096,
  publicKeyEncoding: {
    type: 'spki',
    format: 'pem',
  },
  privateKeyEncoding: {
    type: 'pkcs8',
    format: 'pem',
    cipher: 'aes-256-cbc',
    passphrase: 'top secret',
  },
});
```

The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

@param type The asymmetric key type to generate. See the supported asymmetric key types.
type: 'dsa',

Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, DH, and ML-DSA are currently supported.

If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

```js
const { generateKeyPairSync, } = await import('node:crypto');
const {
  publicKey,
  privateKey,
} = generateKeyPairSync('rsa', {
  modulusLength: 4096,
  publicKeyEncoding: {
    type: 'spki',
    format: 'pem',
  },
  privateKeyEncoding: {
    type: 'pkcs8',
    format: 'pem',
    cipher: 'aes-256-cbc',
    passphrase: 'top secret',
  },
});
```

The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

@param type The asymmetric key type to generate. See the supported asymmetric key types.
type: 'ec',

Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, DH, and ML-DSA are currently supported.

If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

```js
const { generateKeyPairSync, } = await import('node:crypto');
const {
  publicKey,
  privateKey,
} = generateKeyPairSync('rsa', {
  modulusLength: 4096,
  publicKeyEncoding: {
    type: 'spki',
    format: 'pem',
  },
  privateKeyEncoding: {
    type: 'pkcs8',
    format: 'pem',
    cipher: 'aes-256-cbc',
    passphrase: 'top secret',
  },
});
```

The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

@param type The asymmetric key type to generate. See the supported asymmetric key types.
type: 'ed25519',

Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, DH, and ML-DSA are currently supported.

If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

```js
const { generateKeyPairSync, } = await import('node:crypto');
const {
  publicKey,
  privateKey,
} = generateKeyPairSync('rsa', {
  modulusLength: 4096,
  publicKeyEncoding: {
    type: 'spki',
    format: 'pem',
  },
  privateKeyEncoding: {
    type: 'pkcs8',
    format: 'pem',
    cipher: 'aes-256-cbc',
    passphrase: 'top secret',
  },
});
```

The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

@param type The asymmetric key type to generate. See the supported asymmetric key types.
type: 'ed448',

Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, DH, and ML-DSA are currently supported.

If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

```js
const { generateKeyPairSync, } = await import('node:crypto');
const {
  publicKey,
  privateKey,
} = generateKeyPairSync('rsa', {
  modulusLength: 4096,
  publicKeyEncoding: {
    type: 'spki',
    format: 'pem',
  },
  privateKeyEncoding: {
    type: 'pkcs8',
    format: 'pem',
    cipher: 'aes-256-cbc',
    passphrase: 'top secret',
  },
});
```

The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

@param type The asymmetric key type to generate. See the supported asymmetric key types.
type: 'x25519',

Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, DH, and ML-DSA are currently supported.

If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

```js
const { generateKeyPairSync, } = await import('node:crypto');
const {
  publicKey,
  privateKey,
} = generateKeyPairSync('rsa', {
  modulusLength: 4096,
  publicKeyEncoding: {
    type: 'spki',
    format: 'pem',
  },
  privateKeyEncoding: {
    type: 'pkcs8',
    format: 'pem',
    cipher: 'aes-256-cbc',
    passphrase: 'top secret',
  },
});
```

The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

@param type The asymmetric key type to generate. See the supported asymmetric key types.
type: 'x448',

Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, DH, and ML-DSA are currently supported.

If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const { generateKeyPairSync } = await import('node:crypto');

    const { publicKey, privateKey } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });

The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string; otherwise it will be a buffer containing the data encoded as DER.

@param type The asymmetric key type to generate. See the supported asymmetric key types.
type: 'ml-dsa-44' | 'ml-dsa-65' | 'ml-dsa-87',

Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, DH, and ML-DSA are currently supported.

If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const { generateKeyPairSync } = await import('node:crypto');

    const { publicKey, privateKey } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });

The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string; otherwise it will be a buffer containing the data encoded as DER.

@param type The asymmetric key type to generate. See the supported asymmetric key types.
type: 'ml-kem-1024' | 'ml-kem-512' | 'ml-kem-768',

Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, DH, and ML-DSA are currently supported.

If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const { generateKeyPairSync } = await import('node:crypto');

    const { publicKey, privateKey } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });

The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string; otherwise it will be a buffer containing the data encoded as DER.

@param type The asymmetric key type to generate. See the supported asymmetric key types.
type: 'slh-dsa-sha2-128f' | 'slh-dsa-sha2-128s' | 'slh-dsa-sha2-192f' | 'slh-dsa-sha2-192s' | 'slh-dsa-sha2-256f' | 'slh-dsa-sha2-256s' | 'slh-dsa-shake-128f' | 'slh-dsa-shake-128s' | 'slh-dsa-shake-192f' | 'slh-dsa-shake-192s' | 'slh-dsa-shake-256f' | 'slh-dsa-shake-256s',

Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, DH, and ML-DSA are currently supported.

If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

    const { generateKeyPairSync } = await import('node:crypto');

    const { publicKey, privateKey } = generateKeyPairSync('rsa', {
      modulusLength: 4096,
      publicKeyEncoding: {
        type: 'spki',
        format: 'pem',
      },
      privateKeyEncoding: {
        type: 'pkcs8',
        format: 'pem',
        cipher: 'aes-256-cbc',
        passphrase: 'top secret',
      },
    });

The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string; otherwise it will be a buffer containing the data encoded as DER.

@param type The asymmetric key type to generate. See the supported asymmetric key types.
- type: 'hmac' | 'aes',
  options: { length: number }

Synchronously generates a new random secret key of the given length. The type will determine which validations will be performed on the length.

    const { generateKeySync } = await import('node:crypto');

    const key = generateKeySync('hmac', { length: 512 });
    console.log(key.export().toString('hex'));
    // e89..........41e

The size of a generated HMAC key should not exceed the block size of the underlying hash function. See createHmac for more information.

@param type The intended use of the generated secret key. Currently accepted values are 'hmac' and 'aes'.

- size: number,
): void;
Generates a pseudorandom prime of size bits.

If options.safe is true, the prime will be a safe prime -- that is, (prime - 1) / 2 will also be a prime.

The options.add and options.rem parameters can be used to enforce additional requirements, e.g., for Diffie-Hellman:

- If options.add and options.rem are both set, the prime will satisfy the condition that prime % add = rem.
- If only options.add is set and options.safe is not true, the prime will satisfy the condition that prime % add = 1.
- If only options.add is set and options.safe is set to true, the prime will instead satisfy the condition that prime % add = 3. This is necessary because prime % add = 1 for options.add > 2 would contradict the condition enforced by options.safe.
- options.rem is ignored if options.add is not given.

Both options.add and options.rem must be encoded as big-endian sequences if given as an ArrayBuffer, SharedArrayBuffer, TypedArray, Buffer, or DataView.

By default, the prime is encoded as a big-endian sequence of octets in an ArrayBuffer. If the bigint option is true, then a bigint is provided.

@param size The size (in bits) of the prime to generate.
size: number,): void;Generates a pseudorandom prime of
sizebits.If
options.safeistrue, the prime will be a safe prime -- that is,(prime - 1) / 2will also be a prime.The
options.addandoptions.remparameters can be used to enforce additional requirements, e.g., for Diffie-Hellman:- If
options.addandoptions.remare both set, the prime will satisfy the condition thatprime % add = rem. - If only
options.addis set andoptions.safeis nottrue, the prime will satisfy the condition thatprime % add = 1. - If only
options.addis set andoptions.safeis set totrue, the prime will instead satisfy the condition thatprime % add = 3. This is necessary becauseprime % add = 1foroptions.add > 2would contradict the condition enforced byoptions.safe. options.remis ignored ifoptions.addis not given.
Both
options.addandoptions.remmust be encoded as big-endian sequences if given as anArrayBuffer,SharedArrayBuffer,TypedArray,Buffer, orDataView.By default, the prime is encoded as a big-endian sequence of octets in an ArrayBuffer. If the
bigintoption istrue, then a bigint is provided.@param sizeThe size (in bits) of the prime to generate.
- If
- size: number
Generates a pseudorandom prime of
sizebits.If
options.safeistrue, the prime will be a safe prime -- that is,(prime - 1) / 2will also be a prime.The
options.addandoptions.remparameters can be used to enforce additional requirements, e.g., for Diffie-Hellman:- If
options.addandoptions.remare both set, the prime will satisfy the condition thatprime % add = rem. - If only
options.addis set andoptions.safeis nottrue, the prime will satisfy the condition thatprime % add = 1. - If only
options.addis set andoptions.safeis set totrue, the prime will instead satisfy the condition thatprime % add = 3. This is necessary becauseprime % add = 1foroptions.add > 2would contradict the condition enforced byoptions.safe. options.remis ignored ifoptions.addis not given.
Both
options.addandoptions.remmust be encoded as big-endian sequences if given as anArrayBuffer,SharedArrayBuffer,TypedArray,Buffer, orDataView.By default, the prime is encoded as a big-endian sequence of octets in an ArrayBuffer. If the
bigintoption istrue, then a bigint is provided.@param sizeThe size (in bits) of the prime to generate.
size: number,): bigint;Generates a pseudorandom prime of
sizebits.If
options.safeistrue, the prime will be a safe prime -- that is,(prime - 1) / 2will also be a prime.The
options.addandoptions.remparameters can be used to enforce additional requirements, e.g., for Diffie-Hellman:- If
options.addandoptions.remare both set, the prime will satisfy the condition thatprime % add = rem. - If only
options.addis set andoptions.safeis nottrue, the prime will satisfy the condition thatprime % add = 1. - If only
options.addis set andoptions.safeis set totrue, the prime will instead satisfy the condition thatprime % add = 3. This is necessary becauseprime % add = 1foroptions.add > 2would contradict the condition enforced byoptions.safe. options.remis ignored ifoptions.addis not given.
Both
options.addandoptions.remmust be encoded as big-endian sequences if given as anArrayBuffer,SharedArrayBuffer,TypedArray,Buffer, orDataView.By default, the prime is encoded as a big-endian sequence of octets in an ArrayBuffer. If the
bigintoption istrue, then a bigint is provided.@param sizeThe size (in bits) of the prime to generate.
size: number,Generates a pseudorandom prime of
sizebits.If
options.safeistrue, the prime will be a safe prime -- that is,(prime - 1) / 2will also be a prime.The
options.addandoptions.remparameters can be used to enforce additional requirements, e.g., for Diffie-Hellman:- If
options.addandoptions.remare both set, the prime will satisfy the condition thatprime % add = rem. - If only
options.addis set andoptions.safeis nottrue, the prime will satisfy the condition thatprime % add = 1. - If only
options.addis set andoptions.safeis set totrue, the prime will instead satisfy the condition thatprime % add = 3. This is necessary becauseprime % add = 1foroptions.add > 2would contradict the condition enforced byoptions.safe. options.remis ignored ifoptions.addis not given.
Both
options.addandoptions.remmust be encoded as big-endian sequences if given as anArrayBuffer,SharedArrayBuffer,TypedArray,Buffer, orDataView.By default, the prime is encoded as a big-endian sequence of octets in an ArrayBuffer. If the
bigintoption istrue, then a bigint is provided.@param sizeThe size (in bits) of the prime to generate.
size: number,Generates a pseudorandom prime of
sizebits.If
options.safeistrue, the prime will be a safe prime -- that is,(prime - 1) / 2will also be a prime.The
options.addandoptions.remparameters can be used to enforce additional requirements, e.g., for Diffie-Hellman:- If
options.addandoptions.remare both set, the prime will satisfy the condition thatprime % add = rem. - If only
options.addis set andoptions.safeis nottrue, the prime will satisfy the condition thatprime % add = 1. - If only
options.addis set andoptions.safeis set totrue, the prime will instead satisfy the condition thatprime % add = 3. This is necessary becauseprime % add = 1foroptions.add > 2would contradict the condition enforced byoptions.safe. options.remis ignored ifoptions.addis not given.
Both
options.addandoptions.remmust be encoded as big-endian sequences if given as anArrayBuffer,SharedArrayBuffer,TypedArray,Buffer, orDataView.By default, the prime is encoded as a big-endian sequence of octets in an ArrayBuffer. If the
bigintoption istrue, then a bigint is provided.@param sizeThe size (in bits) of the prime to generate.
- If
- nameOrNid: string | number,
Returns information about a given cipher.
Some ciphers accept variable-length keys and initialization vectors. By default, the crypto.getCipherInfo() method will return the default values for these ciphers. To test whether a given key length or IV length is acceptable for a given cipher, use the keyLength and ivLength options. If the given values are unacceptable, undefined will be returned.

@param nameOrNid The name or nid of the cipher to query.
const { getCiphers, } = await import('node:crypto');
console.log(getCiphers()); // ['aes-128-cbc', 'aes-128-ccm', ...]

@returns An array with the names of the supported cipher algorithms.
const { getCurves, } = await import('node:crypto');
console.log(getCurves()); // ['Oakley-EC2N-3', 'Oakley-EC2N-4', ...]

@returns An array with the names of the supported elliptic curves.
- groupName: string
Creates a predefined DiffieHellmanGroup key exchange object. The supported groups are listed in the documentation for DiffieHellmanGroup.

The returned object mimics the interface of objects created by createDiffieHellman, but will not allow changing the keys (with diffieHellman.setPublicKey(), for example). The advantage of using this method is that the parties do not have to generate or exchange a group modulus beforehand, saving both processor and communication time.

Example (obtaining a shared secret):

const { getDiffieHellman, } = await import('node:crypto');

const alice = getDiffieHellman('modp14');
const bob = getDiffieHellman('modp14');
alice.generateKeys();
bob.generateKeys();

const aliceSecret = alice.computeSecret(bob.getPublicKey(), null, 'hex');
const bobSecret = bob.computeSecret(alice.getPublicKey(), null, 'hex');

// aliceSecret and bobSecret should be the same
console.log(aliceSecret === bobSecret);

- @returns
1 if and only if a FIPS-compliant crypto provider is currently in use, 0 otherwise. A future semver-major release may change the return type of this API to a {boolean}.

const { getHashes, } = await import('node:crypto');
console.log(getHashes()); // ['DSA', 'DSA-SHA', 'DSA-SHA1', ...]

@returns An array of the names of the supported hash algorithms, such as 'RSA-SHA256'. Hash algorithms are also called "digest" algorithms.

- typedArray: T): T;
A convenient alias for webcrypto.getRandomValues. This implementation is not compliant with the Web Crypto spec; to write web-compatible code use webcrypto.getRandomValues instead.

@returns Returns typedArray.

- algorithm: string,): string;
A utility for creating one-shot hash digests of data. It can be faster than the object-based crypto.createHash() when hashing a smaller amount of data (<= 5MB) that's readily available. If the data can be big or if it is streamed, it's still recommended to use crypto.createHash() instead.

The algorithm is dependent on the available algorithms supported by the version of OpenSSL on the platform. Examples are 'sha256', 'sha512', etc. On recent releases of OpenSSL, openssl list -digest-algorithms will display the available digest algorithms.

If options is a string, then it specifies the outputEncoding.

Example:

import crypto from 'node:crypto';
import { Buffer } from 'node:buffer';

// Hashing a string and return the result as a hex-encoded string.
const string = 'Node.js';
// 10b3493287f831e81a438811a1ffba01f8cec4b7
console.log(crypto.hash('sha1', string));

// Encode a base64-encoded string into a Buffer, hash it and return
// the result as a buffer.
const base64 = 'Tm9kZS5qcw==';
// <Buffer 10 b3 49 32 87 f8 31 e8 1a 43 88 11 a1 ff ba 01 f8 ce c4 b7>
console.log(crypto.hash('sha1', Buffer.from(base64, 'base64'), 'buffer'));

@param data When data is a string, it will be encoded as UTF-8 before being hashed. If a different input encoding is desired for a string input, the user could encode the string into a TypedArray using either TextEncoder or Buffer.from() and pass the encoded TypedArray into this API instead.

The overloads returning a NonSharedBuffer or string | NonSharedBuffer (depending on the outputEncoding given) share this description.

- digest: string, keylen: number,): void;
HKDF is a simple key derivation function defined in RFC 5869. The given ikm, salt and info are used with the digest to derive a key of keylen bytes.

The supplied callback function is called with two arguments: err and derivedKey. If an error occurs while deriving the key, err will be set; otherwise err will be null. The successfully generated derivedKey will be passed to the callback as an ArrayBuffer. An error will be thrown if any of the input arguments specify invalid values or types.

import { Buffer } from 'node:buffer';
const { hkdf, } = await import('node:crypto');

hkdf('sha512', 'key', 'salt', 'info', 64, (err, derivedKey) => {
  if (err) throw err;
  console.log(Buffer.from(derivedKey).toString('hex'));  // '24156e2...5391653'
});

@param digest The digest algorithm to use.
@param ikm The input keying material. Must be provided but can be zero-length.
@param salt The salt value. Must be provided but can be zero-length.
@param info Additional info value. Must be provided but can be zero-length, and cannot be more than 1024 bytes.
@param keylen The length of the key to generate. Must be greater than 0. The maximum allowable value is 255 times the number of bytes produced by the selected digest function (e.g. sha512 generates 64-byte hashes, making the maximum HKDF output 16320 bytes).

- digest: string, keylen: number
Provides a synchronous HKDF key derivation function as defined in RFC 5869. The given ikm, salt and info are used with the digest to derive a key of keylen bytes.

The successfully generated derivedKey will be returned as an ArrayBuffer.

An error will be thrown if any of the input arguments specify invalid values or types, or if the derived key cannot be generated.

import { Buffer } from 'node:buffer';
const { hkdfSync, } = await import('node:crypto');

const derivedKey = hkdfSync('sha512', 'key', 'salt', 'info', 64);
console.log(Buffer.from(derivedKey).toString('hex'));  // '24156e2...5391653'

@param digest The digest algorithm to use.
@param ikm The input keying material. Must be provided but can be zero-length.
@param salt The salt value. Must be provided but can be zero-length.
@param info Additional info value. Must be provided but can be zero-length, and cannot be more than 1024 bytes.
@param keylen The length of the key to generate. Must be greater than 0. The maximum allowable value is 255 times the number of bytes produced by the selected digest function (e.g. sha512 generates 64-byte hashes, making the maximum HKDF output 16320 bytes).

- iterations: number, keylen: number, digest: string,): void;
Provides an asynchronous Password-Based Key Derivation Function 2 (PBKDF2) implementation. A selected HMAC digest algorithm specified by digest is applied to derive a key of the requested byte length (keylen) from the password, salt and iterations.

The supplied callback function is called with two arguments: err and derivedKey. If an error occurs while deriving the key, err will be set; otherwise err will be null. By default, the successfully generated derivedKey will be passed to the callback as a Buffer. An error will be thrown if any of the input arguments specify invalid values or types.

The iterations argument must be a number set as high as possible. The higher the number of iterations, the more secure the derived key will be, but it will take a longer amount of time to complete.

The salt should be as unique as possible. It is recommended that a salt be random and at least 16 bytes long. See NIST SP 800-132 for details.

When passing strings for password or salt, please consider caveats when using strings as inputs to cryptographic APIs.

const { pbkdf2, } = await import('node:crypto');

pbkdf2('secret', 'salt', 100000, 64, 'sha512', (err, derivedKey) => {
  if (err) throw err;
  console.log(derivedKey.toString('hex'));  // '3745e48...08d59ae'
});

An array of supported digest functions can be retrieved using getHashes.

This API uses libuv's threadpool, which can have surprising and negative performance implications for some applications; see the UV_THREADPOOL_SIZE documentation for more information.

- iterations: number, keylen: number, digest: string): NonSharedBuffer;
Provides a synchronous Password-Based Key Derivation Function 2 (PBKDF2) implementation. A selected HMAC digest algorithm specified by digest is applied to derive a key of the requested byte length (keylen) from the password, salt and iterations.

If an error occurs an Error will be thrown, otherwise the derived key will be returned as a Buffer.

The iterations argument must be a number set as high as possible. The higher the number of iterations, the more secure the derived key will be, but it will take a longer amount of time to complete.

The salt should be as unique as possible. It is recommended that a salt be random and at least 16 bytes long. See NIST SP 800-132 for details.

When passing strings for password or salt, please consider caveats when using strings as inputs to cryptographic APIs.

const { pbkdf2Sync, } = await import('node:crypto');

const key = pbkdf2Sync('secret', 'salt', 100000, 64, 'sha512');
console.log(key.toString('hex'));  // '3745e48...08d59ae'

An array of supported digest functions can be retrieved using getHashes.
- buffer: string | ArrayBufferView<ArrayBufferLike>): NonSharedBuffer;
Decrypts buffer with privateKey. buffer was previously encrypted using the corresponding public key, for example using publicEncrypt.

If privateKey is not a KeyObject, this function behaves as if privateKey had been passed to createPrivateKey. If it is an object, the padding property can be passed. Otherwise, this function uses RSA_PKCS1_OAEP_PADDING.

- buffer: string | ArrayBufferView<ArrayBufferLike>): NonSharedBuffer;
Encrypts buffer with privateKey. The returned data can be decrypted using the corresponding public key, for example using publicDecrypt.

If privateKey is not a KeyObject, this function behaves as if privateKey had been passed to createPrivateKey. If it is an object, the padding property can be passed. Otherwise, this function uses RSA_PKCS1_PADDING.

- size: number,): void;
- buffer: string | ArrayBufferView<ArrayBufferLike>): NonSharedBuffer;
Decrypts buffer with key. buffer was previously encrypted using the corresponding private key, for example using privateEncrypt.

If key is not a KeyObject, this function behaves as if key had been passed to createPublicKey. If it is an object, the padding property can be passed. Otherwise, this function uses RSA_PKCS1_PADDING.

Because RSA public keys can be derived from private keys, a private key may be passed instead of a public key.
- buffer: string | ArrayBufferView<ArrayBufferLike>): NonSharedBuffer;
Encrypts the content of buffer with key and returns a new Buffer with encrypted content. The returned data can be decrypted using the corresponding private key, for example using privateDecrypt.

If key is not a KeyObject, this function behaves as if key had been passed to createPublicKey. If it is an object, the padding property can be passed. Otherwise, this function uses RSA_PKCS1_OAEP_PADDING.

Because RSA public keys can be derived from private keys, a private key may be passed instead of a public key.
- size: number): NonSharedBuffer;
Generates cryptographically strong pseudorandom data. The size argument is a number indicating the number of bytes to generate.

If a callback function is provided, the bytes are generated asynchronously and the callback function is invoked with two arguments: err and buf. If an error occurs, err will be an Error object; otherwise it is null. The buf argument is a Buffer containing the generated bytes.

// Asynchronous
const { randomBytes, } = await import('node:crypto');

randomBytes(256, (err, buf) => {
  if (err) throw err;
  console.log(`${buf.length} bytes of random data: ${buf.toString('hex')}`);
});

If the callback function is not provided, the random bytes are generated synchronously and returned as a Buffer. An error will be thrown if there is a problem generating the bytes.

// Synchronous
const { randomBytes, } = await import('node:crypto');

const buf = randomBytes(256);
console.log(`${buf.length} bytes of random data: ${buf.toString('hex')}`);

The crypto.randomBytes() method will not complete until there is sufficient entropy available. This should normally never take longer than a few milliseconds. The only time when generating the random bytes may conceivably block for a longer period of time is right after boot, when the whole system is still low on entropy.

This API uses libuv's threadpool, which can have surprising and negative performance implications for some applications; see the UV_THREADPOOL_SIZE documentation for more information.

The asynchronous version of crypto.randomBytes() is carried out in a single threadpool request. To minimize threadpool task length variation, partition large randomBytes requests when doing so as part of fulfilling a client request.

@param size The number of bytes to generate. The size must not be larger than 2**31 - 1.

@returns A Buffer, if the callback function is not provided.

The callback overload, (size, callback): void, shares this description.
This function is similar to randomBytes but requires the first argument to be a Buffer that will be filled. It also requires that a callback be passed in.

If the callback function is not provided, an error will be thrown.

import { Buffer } from 'node:buffer';
const { randomFill } = await import('node:crypto');

const buf = Buffer.alloc(10);
randomFill(buf, (err, buf) => {
  if (err) throw err;
  console.log(buf.toString('hex'));
});

randomFill(buf, 5, (err, buf) => {
  if (err) throw err;
  console.log(buf.toString('hex'));
});

// The above is equivalent to the following:
randomFill(buf, 5, 5, (err, buf) => {
  if (err) throw err;
  console.log(buf.toString('hex'));
});

Any ArrayBuffer, TypedArray, or DataView instance may be passed as buffer.

While this includes instances of Float32Array and Float64Array, this function should not be used to generate random floating-point numbers. The result may contain +Infinity, -Infinity, and NaN, and even if the array contains finite numbers only, they are not drawn from a uniform random distribution and have no meaningful lower or upper bounds.

import { Buffer } from 'node:buffer';
const { randomFill } = await import('node:crypto');

const a = new Uint32Array(10);
randomFill(a, (err, buf) => {
  if (err) throw err;
  console.log(Buffer.from(buf.buffer, buf.byteOffset, buf.byteLength).toString('hex'));
});

const b = new DataView(new ArrayBuffer(10));
randomFill(b, (err, buf) => {
  if (err) throw err;
  console.log(Buffer.from(buf.buffer, buf.byteOffset, buf.byteLength).toString('hex'));
});

const c = new ArrayBuffer(10);
randomFill(c, (err, buf) => {
  if (err) throw err;
  console.log(Buffer.from(buf).toString('hex'));
});

This API uses libuv's threadpool, which can have surprising and negative performance implications for some applications; see the UV_THREADPOOL_SIZE documentation for more information.

The asynchronous version of crypto.randomFill() is carried out in a single threadpool request. To minimize threadpool task length variation, partition large randomFill requests when doing so as part of fulfilling a client request.

@param buffer Must be supplied. The size of the provided buffer must not be larger than 2**31 - 1.
@param callback function(err, buf) {}.

The overloads (buffer, offset, callback): void and (buffer, offset, size, callback): void share this description.
Synchronous version of randomFill.
import { Buffer } from 'node:buffer';
const { randomFillSync } = await import('node:crypto');

const buf = Buffer.alloc(10);
console.log(randomFillSync(buf).toString('hex'));

randomFillSync(buf, 5);
console.log(buf.toString('hex'));

// The above is equivalent to the following:
randomFillSync(buf, 5, 5);
console.log(buf.toString('hex'));

Any ArrayBuffer, TypedArray, or DataView instance may be passed as buffer.

import { Buffer } from 'node:buffer';
const { randomFillSync } = await import('node:crypto');

const a = new Uint32Array(10);
console.log(Buffer.from(randomFillSync(a).buffer, a.byteOffset, a.byteLength).toString('hex'));

const b = new DataView(new ArrayBuffer(10));
console.log(Buffer.from(randomFillSync(b).buffer, b.byteOffset, b.byteLength).toString('hex'));

const c = new ArrayBuffer(10);
console.log(Buffer.from(randomFillSync(c)).toString('hex'));

@param buffer Must be supplied. The size of the provided buffer must not be larger than 2**31 - 1.

@returns The object passed as buffer argument.

- max: number): number;
Return a random integer `n` such that `min <= n < max`. This implementation avoids modulo bias.

The range (`max - min`) must be less than `2**48`. `min` and `max` must be safe integers.

If the `callback` function is not provided, the random integer is generated synchronously.

```js
// Asynchronous
const { randomInt } = await import('node:crypto');

randomInt(3, (err, n) => {
  if (err) throw err;
  console.log(`Random number chosen from (0, 1, 2): ${n}`);
});
```

```js
// Synchronous
const { randomInt } = await import('node:crypto');

const n = randomInt(3);
console.log(`Random number chosen from (0, 1, 2): ${n}`);
```

```js
// With `min` argument
const { randomInt } = await import('node:crypto');

const n = randomInt(1, 7);
console.log(`The dice rolled: ${n}`);
```

@param max End of random range (exclusive).
min: number, max: number): number;
(Same behavior and documentation as the `randomInt` overload above; `min` is the inclusive start of the range, `max` the exclusive end.)

max: number,): void;
(Same behavior as above, with the result delivered to `callback` as `function(err, n) {}`.)

min: number, max: number,): void;
(Same behavior as above, with the result delivered to `callback` as `function(err, n) {}`.)

- ): `${string}-${string}-${string}-${string}-${string}`;
Generates a random RFC 4122 version 4 UUID. The UUID is generated using a cryptographic pseudorandom number generator.
- keylen: number,): void;
Provides an asynchronous scrypt implementation. Scrypt is a password-based key derivation function that is designed to be expensive computationally and memory-wise in order to make brute-force attacks unrewarding.
The `salt` should be as unique as possible. It is recommended that a salt is random and at least 16 bytes long. See NIST SP 800-132 for details.

When passing strings for `password` or `salt`, please consider caveats when using strings as inputs to cryptographic APIs.

The `callback` function is called with two arguments: `err` and `derivedKey`. `err` is an exception object when key derivation fails, otherwise `err` is `null`. `derivedKey` is passed to the callback as a `Buffer`.

An exception is thrown when any of the input arguments specify invalid values or types.

```js
const { scrypt } = await import('node:crypto');

// Using the factory defaults.
scrypt('password', 'salt', 64, (err, derivedKey) => {
  if (err) throw err;
  console.log(derivedKey.toString('hex'));  // '3745e48...08d59ae'
});

// Using a custom N parameter. Must be a power of two.
scrypt('password', 'salt', 64, { N: 1024 }, (err, derivedKey) => {
  if (err) throw err;
  console.log(derivedKey.toString('hex'));  // '3745e48...aa39b34'
});
```

keylen: number,): void;
(Same behavior and documentation as the `scrypt` overload above.)

- keylen: number,): NonSharedBuffer;
Provides a synchronous scrypt implementation. Scrypt is a password-based key derivation function that is designed to be expensive computationally and memory-wise in order to make brute-force attacks unrewarding.
The `salt` should be as unique as possible. It is recommended that a salt is random and at least 16 bytes long. See NIST SP 800-132 for details.

When passing strings for `password` or `salt`, please consider caveats when using strings as inputs to cryptographic APIs.

An exception is thrown when key derivation fails, otherwise the derived key is returned as a `Buffer`.

An exception is thrown when any of the input arguments specify invalid values or types.

```js
const { scryptSync } = await import('node:crypto');

// Using the factory defaults.
const key1 = scryptSync('password', 'salt', 64);
console.log(key1.toString('hex'));  // '3745e48...08d59ae'

// Using a custom N parameter. Must be a power of two.
const key2 = scryptSync('password', 'salt', 64, { N: 1024 });
console.log(key2.toString('hex'));  // '3745e48...aa39b34'
```

- engine: string, flags?: number): void;
Load and set the `engine` for some or all OpenSSL functions (selected by `flags`). `engine` could be either an id or a path to the engine's shared library.

The optional `flags` argument uses `ENGINE_METHOD_ALL` by default. `flags` is a bit field taking one of, or a mix of, the following flags (defined in `crypto.constants`):

- `crypto.constants.ENGINE_METHOD_RSA`
- `crypto.constants.ENGINE_METHOD_DSA`
- `crypto.constants.ENGINE_METHOD_DH`
- `crypto.constants.ENGINE_METHOD_RAND`
- `crypto.constants.ENGINE_METHOD_EC`
- `crypto.constants.ENGINE_METHOD_CIPHERS`
- `crypto.constants.ENGINE_METHOD_DIGESTS`
- `crypto.constants.ENGINE_METHOD_PKEY_METHS`
- `crypto.constants.ENGINE_METHOD_PKEY_ASN1_METHS`
- `crypto.constants.ENGINE_METHOD_ALL`
- `crypto.constants.ENGINE_METHOD_NONE`
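As an illustrative sketch of how the flags combine (the engine id below is a placeholder, and engine configuration is unavailable in some runtimes, e.g. Bun):

```js
const crypto = await import('node:crypto');
const { ENGINE_METHOD_RSA, ENGINE_METHOD_RAND } = crypto.constants;

// Bit-field restricting the engine to RSA and random-number operations.
const flags = ENGINE_METHOD_RSA | ENGINE_METHOD_RAND;

// 'myengine' is a hypothetical engine id; actually calling setEngine
// requires an OpenSSL build with engine support:
// crypto.setEngine('myengine', flags);
console.log(flags);
```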
- bool: boolean): void;

Enables the FIPS compliant crypto provider in a FIPS-enabled Node.js build. Throws an error if FIPS mode is not available.

@param bool `true` to enable FIPS mode.

- algorithm: undefined | null | string, data: ArrayBufferView,): NonSharedBuffer;
Calculates and returns the signature for `data` using the given private key and algorithm. If `algorithm` is `null` or `undefined`, then the algorithm is dependent upon the key type. `algorithm` is required to be `null` or `undefined` for Ed25519, Ed448, and ML-DSA.

If `key` is not a `KeyObject`, this function behaves as if `key` had been passed to `createPrivateKey`. If it is an object, the following additional properties can be passed:

If the `callback` function is provided this function uses libuv's threadpool.

algorithm: undefined | null | string, data: ArrayBufferView,): void;
(Same behavior and documentation as the `sign` overload above.)

- a: ArrayBufferView, b: ArrayBufferView): boolean;
This function compares the underlying bytes that represent the given `ArrayBuffer`, `TypedArray`, or `DataView` instances using a constant-time algorithm.

This function does not leak timing information that would allow an attacker to guess one of the values. This is suitable for comparing HMAC digests or secret values like authentication cookies or capability URLs.

`a` and `b` must both be `Buffer`s, `TypedArray`s, or `DataView`s, and they must have the same byte length. An error is thrown if `a` and `b` have different byte lengths.

If at least one of `a` and `b` is a `TypedArray` with more than one byte per entry, such as `Uint16Array`, the result will be computed using the platform byte order.

When both of the inputs are `Float32Array`s or `Float64Array`s, this function might return unexpected results due to IEEE 754 encoding of floating-point numbers. In particular, neither `x === y` nor `Object.is(x, y)` implies that the byte representations of two floating-point numbers `x` and `y` are equal.

Use of `crypto.timingSafeEqual` does not guarantee that the surrounding code is timing-safe. Care should be taken to ensure that the surrounding code does not introduce timing vulnerabilities.

- algorithm: undefined | null | string, data: ArrayBufferView, signature: ArrayBufferView): boolean;
Verifies the given signature for `data` using the given key and algorithm. If `algorithm` is `null` or `undefined`, then the algorithm is dependent upon the key type. `algorithm` is required to be `null` or `undefined` for Ed25519, Ed448, and ML-DSA.

If `key` is not a `KeyObject`, this function behaves as if `key` had been passed to `createPublicKey`. If it is an object, the following additional properties can be passed:

The `signature` argument is the previously calculated signature for the `data`.

Because public keys can be derived from private keys, a private key or a public key may be passed for `key`.

If the `callback` function is provided this function uses libuv's threadpool.

algorithm: undefined | null | string, data: ArrayBufferView, signature: ArrayBufferView,): void;
(Same behavior and documentation as the `verify` overload above.)
Type definitions
- type: 'rsa',): void;
Generates a new asymmetric key pair of the given `type`. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

If a `publicKeyEncoding` or `privateKeyEncoding` was specified, this function behaves as if `keyObject.export()` had been called on its result. Otherwise, the respective part of the key is returned as a `KeyObject`.

It is recommended to encode public keys as `'spki'` and private keys as `'pkcs8'` with encryption for long-term storage:

```js
const { generateKeyPair } = await import('node:crypto');

generateKeyPair('rsa', {
  modulusLength: 4096,
  publicKeyEncoding: {
    type: 'spki',
    format: 'pem',
  },
  privateKeyEncoding: {
    type: 'pkcs8',
    format: 'pem',
    cipher: 'aes-256-cbc',
    passphrase: 'top secret',
  },
}, (err, publicKey, privateKey) => {
  // Handle errors and use the generated key pair.
});
```

On completion, `callback` will be called with `err` set to `undefined` and `publicKey` / `privateKey` representing the generated key pair.

If this method is invoked as its `util.promisify()`ed version, it returns a `Promise` for an `Object` with `publicKey` and `privateKey` properties.

@param type The asymmetric key type to generate. See the supported asymmetric key types.
type: 'rsa',): void;Generates a new asymmetric key pair of the given
type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncodingorprivateKeyEncodingwas specified, this function behaves as ifkeyObject.export()had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject.It is recommended to encode public keys as
'spki'and private keys as'pkcs8'with encryption for long-term storage:const { generateKeyPair, } = await import('node:crypto'); generateKeyPair('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, }, (err, publicKey, privateKey) => { // Handle errors and use the generated key pair. });On completion,
callbackwill be called witherrset toundefinedandpublicKey/privateKeyrepresenting the generated key pair.If this method is invoked as its
util.promisify()ed version, it returns aPromisefor anObjectwithpublicKeyandprivateKeyproperties.@param typeThe asymmetric key type to generate. See the supported asymmetric key types.
type: 'rsa',): void;Generates a new asymmetric key pair of the given
type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncodingorprivateKeyEncodingwas specified, this function behaves as ifkeyObject.export()had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject.It is recommended to encode public keys as
'spki'and private keys as'pkcs8'with encryption for long-term storage:const { generateKeyPair, } = await import('node:crypto'); generateKeyPair('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, }, (err, publicKey, privateKey) => { // Handle errors and use the generated key pair. });On completion,
callbackwill be called witherrset toundefinedandpublicKey/privateKeyrepresenting the generated key pair.If this method is invoked as its
util.promisify()ed version, it returns aPromisefor anObjectwithpublicKeyandprivateKeyproperties.@param typeThe asymmetric key type to generate. See the supported asymmetric key types.
type: 'rsa',): void;Generates a new asymmetric key pair of the given
type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncodingorprivateKeyEncodingwas specified, this function behaves as ifkeyObject.export()had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject.It is recommended to encode public keys as
'spki'and private keys as'pkcs8'with encryption for long-term storage:const { generateKeyPair, } = await import('node:crypto'); generateKeyPair('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, }, (err, publicKey, privateKey) => { // Handle errors and use the generated key pair. });On completion,
callbackwill be called witherrset toundefinedandpublicKey/privateKeyrepresenting the generated key pair.If this method is invoked as its
util.promisify()ed version, it returns aPromisefor anObjectwithpublicKeyandprivateKeyproperties.@param typeThe asymmetric key type to generate. See the supported asymmetric key types.
type: 'rsa',): void;Generates a new asymmetric key pair of the given
type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncodingorprivateKeyEncodingwas specified, this function behaves as ifkeyObject.export()had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject.It is recommended to encode public keys as
'spki'and private keys as'pkcs8'with encryption for long-term storage:const { generateKeyPair, } = await import('node:crypto'); generateKeyPair('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, }, (err, publicKey, privateKey) => { // Handle errors and use the generated key pair. });On completion,
callbackwill be called witherrset toundefinedandpublicKey/privateKeyrepresenting the generated key pair.If this method is invoked as its
util.promisify()ed version, it returns aPromisefor anObjectwithpublicKeyandprivateKeyproperties.@param typeThe asymmetric key type to generate. See the supported asymmetric key types.
type: 'rsa-pss',): void;Generates a new asymmetric key pair of the given
type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncodingorprivateKeyEncodingwas specified, this function behaves as ifkeyObject.export()had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject.It is recommended to encode public keys as
'spki'and private keys as'pkcs8'with encryption for long-term storage:const { generateKeyPair, } = await import('node:crypto'); generateKeyPair('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, }, (err, publicKey, privateKey) => { // Handle errors and use the generated key pair. });On completion,
callbackwill be called witherrset toundefinedandpublicKey/privateKeyrepresenting the generated key pair.If this method is invoked as its
util.promisify()ed version, it returns aPromisefor anObjectwithpublicKeyandprivateKeyproperties.@param typeThe asymmetric key type to generate. See the supported asymmetric key types.
type: 'rsa-pss',): void;Generates a new asymmetric key pair of the given
type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncodingorprivateKeyEncodingwas specified, this function behaves as ifkeyObject.export()had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject.It is recommended to encode public keys as
'spki'and private keys as'pkcs8'with encryption for long-term storage:const { generateKeyPair, } = await import('node:crypto'); generateKeyPair('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, }, (err, publicKey, privateKey) => { // Handle errors and use the generated key pair. });On completion,
callbackwill be called witherrset toundefinedandpublicKey/privateKeyrepresenting the generated key pair.If this method is invoked as its
util.promisify()ed version, it returns aPromisefor anObjectwithpublicKeyandprivateKeyproperties.@param typeThe asymmetric key type to generate. See the supported asymmetric key types.
type: 'rsa-pss',): void;Generates a new asymmetric key pair of the given
type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncodingorprivateKeyEncodingwas specified, this function behaves as ifkeyObject.export()had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject.It is recommended to encode public keys as
'spki'and private keys as'pkcs8'with encryption for long-term storage:const { generateKeyPair, } = await import('node:crypto'); generateKeyPair('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, }, (err, publicKey, privateKey) => { // Handle errors and use the generated key pair. });On completion,
callbackwill be called witherrset toundefinedandpublicKey/privateKeyrepresenting the generated key pair.If this method is invoked as its
util.promisify()ed version, it returns aPromisefor anObjectwithpublicKeyandprivateKeyproperties.@param typeThe asymmetric key type to generate. See the supported asymmetric key types.
type: 'rsa-pss',): void;Generates a new asymmetric key pair of the given
type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncodingorprivateKeyEncodingwas specified, this function behaves as ifkeyObject.export()had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject.It is recommended to encode public keys as
'spki'and private keys as'pkcs8'with encryption for long-term storage:const { generateKeyPair, } = await import('node:crypto'); generateKeyPair('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, }, (err, publicKey, privateKey) => { // Handle errors and use the generated key pair. });On completion,
callbackwill be called witherrset toundefinedandpublicKey/privateKeyrepresenting the generated key pair.If this method is invoked as its
util.promisify()ed version, it returns aPromisefor anObjectwithpublicKeyandprivateKeyproperties.@param typeThe asymmetric key type to generate. See the supported asymmetric key types.
type: 'rsa-pss',): void;Generates a new asymmetric key pair of the given
type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncodingorprivateKeyEncodingwas specified, this function behaves as ifkeyObject.export()had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject.It is recommended to encode public keys as
'spki'and private keys as'pkcs8'with encryption for long-term storage:const { generateKeyPair, } = await import('node:crypto'); generateKeyPair('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, }, (err, publicKey, privateKey) => { // Handle errors and use the generated key pair. });On completion,
callbackwill be called witherrset toundefinedandpublicKey/privateKeyrepresenting the generated key pair.If this method is invoked as its
util.promisify()ed version, it returns aPromisefor anObjectwithpublicKeyandprivateKeyproperties.@param typeThe asymmetric key type to generate. See the supported asymmetric key types.
type: 'dsa',): void;Generates a new asymmetric key pair of the given
type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncodingorprivateKeyEncodingwas specified, this function behaves as ifkeyObject.export()had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject.It is recommended to encode public keys as
'spki'and private keys as'pkcs8'with encryption for long-term storage:const { generateKeyPair, } = await import('node:crypto'); generateKeyPair('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, }, (err, publicKey, privateKey) => { // Handle errors and use the generated key pair. });On completion,
callbackwill be called witherrset toundefinedandpublicKey/privateKeyrepresenting the generated key pair.If this method is invoked as its
util.promisify()ed version, it returns aPromisefor anObjectwithpublicKeyandprivateKeyproperties.@param typeThe asymmetric key type to generate. See the supported asymmetric key types.
type: 'dsa',): void;Generates a new asymmetric key pair of the given
type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncodingorprivateKeyEncodingwas specified, this function behaves as ifkeyObject.export()had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject.It is recommended to encode public keys as
'spki'and private keys as'pkcs8'with encryption for long-term storage:const { generateKeyPair, } = await import('node:crypto'); generateKeyPair('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, }, (err, publicKey, privateKey) => { // Handle errors and use the generated key pair. });On completion,
callbackwill be called witherrset toundefinedandpublicKey/privateKeyrepresenting the generated key pair.If this method is invoked as its
util.promisify()ed version, it returns aPromisefor anObjectwithpublicKeyandprivateKeyproperties.@param typeThe asymmetric key type to generate. See the supported asymmetric key types.
type: 'dsa',): void;Generates a new asymmetric key pair of the given
type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncodingorprivateKeyEncodingwas specified, this function behaves as ifkeyObject.export()had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject.It is recommended to encode public keys as
'spki'and private keys as'pkcs8'with encryption for long-term storage:const { generateKeyPair, } = await import('node:crypto'); generateKeyPair('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, }, (err, publicKey, privateKey) => { // Handle errors and use the generated key pair. });On completion,
callbackwill be called witherrset toundefinedandpublicKey/privateKeyrepresenting the generated key pair.If this method is invoked as its
util.promisify()ed version, it returns aPromisefor anObjectwithpublicKeyandprivateKeyproperties.@param typeThe asymmetric key type to generate. See the supported asymmetric key types.
type: 'dsa',): void;Generates a new asymmetric key pair of the given
type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncodingorprivateKeyEncodingwas specified, this function behaves as ifkeyObject.export()had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject.It is recommended to encode public keys as
'spki'and private keys as'pkcs8'with encryption for long-term storage:const { generateKeyPair, } = await import('node:crypto'); generateKeyPair('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, }, (err, publicKey, privateKey) => { // Handle errors and use the generated key pair. });On completion,
callbackwill be called witherrset toundefinedandpublicKey/privateKeyrepresenting the generated key pair.If this method is invoked as its
util.promisify()ed version, it returns aPromisefor anObjectwithpublicKeyandprivateKeyproperties.@param typeThe asymmetric key type to generate. See the supported asymmetric key types.
type: 'dsa',): void;Generates a new asymmetric key pair of the given
type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncodingorprivateKeyEncodingwas specified, this function behaves as ifkeyObject.export()had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject.It is recommended to encode public keys as
'spki'and private keys as'pkcs8'with encryption for long-term storage:const { generateKeyPair, } = await import('node:crypto'); generateKeyPair('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, }, (err, publicKey, privateKey) => { // Handle errors and use the generated key pair. });On completion,
callbackwill be called witherrset toundefinedandpublicKey/privateKeyrepresenting the generated key pair.If this method is invoked as its
util.promisify()ed version, it returns aPromisefor anObjectwithpublicKeyandprivateKeyproperties.@param typeThe asymmetric key type to generate. See the supported asymmetric key types.
type: 'ec',): void;Generates a new asymmetric key pair of the given
type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncodingorprivateKeyEncodingwas specified, this function behaves as ifkeyObject.export()had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject.It is recommended to encode public keys as
'spki'and private keys as'pkcs8'with encryption for long-term storage:const { generateKeyPair, } = await import('node:crypto'); generateKeyPair('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, }, (err, publicKey, privateKey) => { // Handle errors and use the generated key pair. });On completion,
callbackwill be called witherrset toundefinedandpublicKey/privateKeyrepresenting the generated key pair.If this method is invoked as its
util.promisify()ed version, it returns aPromisefor anObjectwithpublicKeyandprivateKeyproperties.@param typeThe asymmetric key type to generate. See the supported asymmetric key types.
type: 'ec',): void;Generates a new asymmetric key pair of the given
type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncodingorprivateKeyEncodingwas specified, this function behaves as ifkeyObject.export()had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject.It is recommended to encode public keys as
'spki'and private keys as'pkcs8'with encryption for long-term storage:const { generateKeyPair, } = await import('node:crypto'); generateKeyPair('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, }, (err, publicKey, privateKey) => { // Handle errors and use the generated key pair. });On completion,
callbackwill be called witherrset toundefinedandpublicKey/privateKeyrepresenting the generated key pair.If this method is invoked as its
util.promisify()ed version, it returns aPromisefor anObjectwithpublicKeyandprivateKeyproperties.@param typeThe asymmetric key type to generate. See the supported asymmetric key types.
type: 'ed25519',): void;

Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

const { generateKeyPair } = await import('node:crypto');
generateKeyPair('rsa', {
  modulusLength: 4096,
  publicKeyEncoding: {
    type: 'spki',
    format: 'pem',
  },
  privateKeyEncoding: {
    type: 'pkcs8',
    format: 'pem',
    cipher: 'aes-256-cbc',
    passphrase: 'top secret',
  },
}, (err, publicKey, privateKey) => {
  // Handle errors and use the generated key pair.
});

On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

@param type — The asymmetric key type to generate. See the supported asymmetric key types.
type: 'ed448',): void;

Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

const { generateKeyPair } = await import('node:crypto');
generateKeyPair('rsa', {
  modulusLength: 4096,
  publicKeyEncoding: {
    type: 'spki',
    format: 'pem',
  },
  privateKeyEncoding: {
    type: 'pkcs8',
    format: 'pem',
    cipher: 'aes-256-cbc',
    passphrase: 'top secret',
  },
}, (err, publicKey, privateKey) => {
  // Handle errors and use the generated key pair.
});

On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

@param type — The asymmetric key type to generate. See the supported asymmetric key types.
type: 'x25519',): void;

Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

const { generateKeyPair } = await import('node:crypto');
generateKeyPair('rsa', {
  modulusLength: 4096,
  publicKeyEncoding: {
    type: 'spki',
    format: 'pem',
  },
  privateKeyEncoding: {
    type: 'pkcs8',
    format: 'pem',
    cipher: 'aes-256-cbc',
    passphrase: 'top secret',
  },
}, (err, publicKey, privateKey) => {
  // Handle errors and use the generated key pair.
});

On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

@param type — The asymmetric key type to generate. See the supported asymmetric key types.
type: 'x448',): void;

Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

const { generateKeyPair } = await import('node:crypto');
generateKeyPair('rsa', {
  modulusLength: 4096,
  publicKeyEncoding: {
    type: 'spki',
    format: 'pem',
  },
  privateKeyEncoding: {
    type: 'pkcs8',
    format: 'pem',
    cipher: 'aes-256-cbc',
    passphrase: 'top secret',
  },
}, (err, publicKey, privateKey) => {
  // Handle errors and use the generated key pair.
});

On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

@param type — The asymmetric key type to generate. See the supported asymmetric key types.
type: 'ml-dsa-44' | 'ml-dsa-65' | 'ml-dsa-87',): void;

Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

const { generateKeyPair } = await import('node:crypto');
generateKeyPair('rsa', {
  modulusLength: 4096,
  publicKeyEncoding: {
    type: 'spki',
    format: 'pem',
  },
  privateKeyEncoding: {
    type: 'pkcs8',
    format: 'pem',
    cipher: 'aes-256-cbc',
    passphrase: 'top secret',
  },
}, (err, publicKey, privateKey) => {
  // Handle errors and use the generated key pair.
});

On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

@param type — The asymmetric key type to generate. See the supported asymmetric key types.
type: 'ml-kem-1024' | 'ml-kem-512' | 'ml-kem-768',): void;

Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

const { generateKeyPair } = await import('node:crypto');
generateKeyPair('rsa', {
  modulusLength: 4096,
  publicKeyEncoding: {
    type: 'spki',
    format: 'pem',
  },
  privateKeyEncoding: {
    type: 'pkcs8',
    format: 'pem',
    cipher: 'aes-256-cbc',
    passphrase: 'top secret',
  },
}, (err, publicKey, privateKey) => {
  // Handle errors and use the generated key pair.
});

On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

@param type — The asymmetric key type to generate. See the supported asymmetric key types.
type: 'slh-dsa-sha2-128f' | 'slh-dsa-sha2-128s' | 'slh-dsa-sha2-192f' | 'slh-dsa-sha2-192s' | 'slh-dsa-sha2-256f' | 'slh-dsa-sha2-256s' | 'slh-dsa-shake-128f' | 'slh-dsa-shake-128s' | 'slh-dsa-shake-192f' | 'slh-dsa-shake-192s' | 'slh-dsa-shake-256f' | 'slh-dsa-shake-256s',
): void;

Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

const { generateKeyPair, } = await import('node:crypto');

generateKeyPair('rsa', {
  modulusLength: 4096,
  publicKeyEncoding: {
    type: 'spki',
    format: 'pem',
  },
  privateKeyEncoding: {
    type: 'pkcs8',
    format: 'pem',
    cipher: 'aes-256-cbc',
    passphrase: 'top secret',
  },
}, (err, publicKey, privateKey) => {
  // Handle errors and use the generated key pair.
});

On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

@param type The asymmetric key type to generate. See the supported asymmetric key types.
namespace generateKeyPair
namespace webcrypto
An implementation of the Web Crypto API standard.
See the Web Crypto API documentation for details.
interface AeadParams
interface AesCbcParams
interface AesCtrParams
interface AesDerivedKeyParams
interface AesKeyAlgorithm
interface AesKeyGenParams
interface Argon2Params
interface ContextParams
interface Crypto
Importing the webcrypto object (import { webcrypto } from 'node:crypto') gives an instance of the Crypto class. Crypto is a singleton that provides access to the remainder of the crypto API.

- getRandomValues<T extends Uint8Array<ArrayBufferLike> | Uint8ClampedArray<ArrayBufferLike> | Uint16Array<ArrayBufferLike> | Uint32Array<ArrayBufferLike> | Int8Array<ArrayBufferLike> | Int16Array<ArrayBufferLike> | Int32Array<ArrayBufferLike> | BigUint64Array<ArrayBufferLike> | BigInt64Array<ArrayBufferLike>>(typedArray: T): T;
Generates cryptographically strong random values. The given typedArray is filled with random values, and a reference to typedArray is returned.

The given typedArray must be an integer-based instance of NodeJS.TypedArray, i.e. Float32Array and Float64Array are not accepted.

An error will be thrown if the given typedArray is larger than 65,536 bytes.

Generates a random RFC 4122 version 4 UUID. The UUID is generated using a cryptographic pseudorandom number generator.
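As a minimal sketch of the two methods above: getRandomValues fills an integer typed array in place and returns the same reference, while randomUUID synchronously returns a 36-character v4 UUID string.

```javascript
import { webcrypto } from 'node:crypto';

// getRandomValues fills the array in place and returns the same reference.
const bytes = new Uint8Array(16);
const ref = webcrypto.getRandomValues(bytes);
console.log(ref === bytes); // true

// randomUUID produces an RFC 4122 version 4 UUID string.
const uuid = webcrypto.randomUUID();
console.log(uuid.length); // 36
```

Note that a Float32Array passed to getRandomValues would throw, per the restriction above.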
interface CryptoKey
- readonly algorithm: KeyAlgorithm
An object detailing the algorithm for which the key can be used along with additional algorithm-specific parameters.
- readonly extractable: boolean
When true, the CryptoKey can be extracted using either subtleCrypto.exportKey() or subtleCrypto.wrapKey().

- readonly usages: KeyUsage[]
An array of strings identifying the operations for which the key may be used.
The possible usages are:
- 'encrypt' - The key may be used to encrypt data.
- 'decrypt' - The key may be used to decrypt data.
- 'sign' - The key may be used to generate digital signatures.
- 'verify' - The key may be used to verify digital signatures.
- 'deriveKey' - The key may be used to derive a new key.
- 'deriveBits' - The key may be used to derive bits.
- 'wrapKey' - The key may be used to wrap another key.
- 'unwrapKey' - The key may be used to unwrap another key.
Valid key usages depend on the key algorithm (identified by cryptokey.algorithm.name).
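A brief sketch of inspecting these CryptoKey properties on a generated HMAC key (assumes an ESM module with top-level await):

```javascript
import { webcrypto } from 'node:crypto';

const key = await webcrypto.subtle.generateKey(
  { name: 'HMAC', hash: 'SHA-256' }, // algorithm and its parameters
  true,                              // extractable
  ['sign', 'verify'],                // requested key usages
);

console.log(key.type);   // 'secret'
console.log(key.usages); // includes 'sign' and 'verify'
```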
interface CryptoKeyPair
The CryptoKeyPair is a simple dictionary object with publicKey and privateKey properties, representing an asymmetric key pair.

interface CShakeParams
interface EcdhKeyDeriveParams
interface EcdsaParams
interface EcKeyAlgorithm
interface EcKeyGenParams
interface EcKeyImportParams
interface EncapsulatedBits
interface EncapsulatedKey
interface HkdfParams
interface HmacImportParams
interface HmacKeyAlgorithm
interface HmacKeyGenParams
interface KeyAlgorithm
interface KmacImportParams
interface KmacKeyAlgorithm
interface KmacKeyGenParams
interface KmacParams
interface Pbkdf2Params
interface RsaHashedImportParams
interface RsaHashedKeyAlgorithm
interface RsaHashedKeyGenParams
interface RsaKeyAlgorithm
interface RsaKeyGenParams
interface RsaOaepParams
interface RsaOtherPrimesInfo
interface RsaPssParams
interface SubtleCrypto
A message recipient uses their asymmetric private key to decrypt an "encapsulated key" (ciphertext), thereby recovering a temporary symmetric key (represented as an ArrayBuffer) which is then used to decrypt a message.

The algorithms currently supported include:

- 'ML-KEM-512'
- 'ML-KEM-768'
- 'ML-KEM-1024'

@returns Fulfills with an ArrayBuffer upon success.

- sharedKeyAlgorithm: AlgorithmIdentifier | HmacImportParams | AesDerivedKeyParams | KmacImportParams,
- extractable: boolean,
A message recipient uses their asymmetric private key to decrypt an "encapsulated key" (ciphertext), thereby recovering a temporary symmetric key (represented as a CryptoKey) which is then used to decrypt a message.

The algorithms currently supported include:

- 'ML-KEM-512'
- 'ML-KEM-768'
- 'ML-KEM-1024'

@param usages See Key usages.

@returns Fulfills with a CryptoKey upon success.

Using the method and parameters specified in algorithm and the keying material provided by key, this method attempts to decipher the provided data. If successful, the returned promise will be resolved with an <ArrayBuffer> containing the plaintext result.

The algorithms currently supported include:

- 'AES-CBC'
- 'AES-CTR'
- 'AES-GCM'
- 'AES-OCB'
- 'ChaCha20-Poly1305'
- 'RSA-OAEP'
- length?: null | number

Using the method and parameters specified in algorithm and the keying material provided by baseKey, this method attempts to generate length bits. The Node.js implementation requires that when length is a number it must be a multiple of 8. When length is null the maximum number of bits for a given algorithm is generated. This is allowed for the 'ECDH', 'X25519', and 'X448' algorithms. If successful, the returned promise will be resolved with an <ArrayBuffer> containing the generated data.

The algorithms currently supported include:

- 'Argon2d'
- 'Argon2i'
- 'Argon2id'
- 'ECDH'
- 'HKDF'
- 'PBKDF2'
- 'X25519'
- 'X448'
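A minimal sketch of deriveBits with PBKDF2, producing 256 bits (a multiple of 8, per the rule above; assumes an ESM module with top-level await — the fixed salt and iteration count are for illustration only):

```javascript
import { webcrypto } from 'node:crypto';

// Import the raw password as a non-extractable PBKDF2 base key.
const baseKey = await webcrypto.subtle.importKey(
  'raw',
  new TextEncoder().encode('a sample password'),
  'PBKDF2',
  false,           // KDF keys must not be extractable
  ['deriveBits'],
);

const bits = await webcrypto.subtle.deriveBits(
  { name: 'PBKDF2', hash: 'SHA-256', salt: new Uint8Array(16), iterations: 1000 },
  baseKey,
  256,             // must be a multiple of 8
);

console.log(bits.byteLength); // 32
```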
- length: number
- derivedKeyAlgorithm: AlgorithmIdentifier | HmacImportParams | AesDerivedKeyParams | KmacImportParams,
- extractable: boolean,

Using the method and parameters specified in algorithm, and the keying material provided by baseKey, this method attempts to generate a new <CryptoKey> based on the method and parameters in derivedKeyAlgorithm.

Calling subtle.deriveKey() is equivalent to calling subtle.deriveBits() to generate raw keying material, then passing the result into the subtle.importKey() method using the deriveKeyAlgorithm, extractable, and keyUsages parameters as input.

The algorithms currently supported include:

- 'Argon2d'
- 'Argon2i'
- 'Argon2id'
- 'ECDH'
- 'HKDF'
- 'PBKDF2'
- 'X25519'
- 'X448'

@param keyUsages See Key usages.
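The deriveKey flow above can be sketched by deriving an AES-GCM key from a password with PBKDF2 (assumes an ESM module with top-level await; the zero salt is for illustration only — use a random salt in practice):

```javascript
import { webcrypto } from 'node:crypto';

const baseKey = await webcrypto.subtle.importKey(
  'raw',
  new TextEncoder().encode('a sample password'),
  'PBKDF2',
  false,
  ['deriveKey'],
);

const aesKey = await webcrypto.subtle.deriveKey(
  { name: 'PBKDF2', hash: 'SHA-256', salt: new Uint8Array(16), iterations: 1000 },
  baseKey,
  { name: 'AES-GCM', length: 256 }, // derivedKeyAlgorithm
  false,                            // extractable
  ['encrypt', 'decrypt'],           // keyUsages
);

console.log(aesKey.algorithm.name); // 'AES-GCM'
```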
Using the method identified by algorithm, subtle.digest() attempts to generate a digest of data. If successful, the returned promise is resolved with an <ArrayBuffer> containing the computed digest.

If algorithm is provided as a <string>, it must be one of:

- 'cSHAKE128'
- 'cSHAKE256'
- 'SHA-1'
- 'SHA-256'
- 'SHA-384'
- 'SHA-512'
- 'SHA3-256'
- 'SHA3-384'
- 'SHA3-512'

If algorithm is provided as an <Object>, it must have a name property whose value is one of the above.

Uses a message recipient's asymmetric public key to encrypt a temporary symmetric key. This encrypted key is the "encapsulated key" represented as EncapsulatedBits.

The algorithms currently supported include:

- 'ML-KEM-512'
- 'ML-KEM-768'
- 'ML-KEM-1024'

@returns Fulfills with EncapsulatedBits upon success.

- sharedKeyAlgorithm: AlgorithmIdentifier | HmacImportParams | AesDerivedKeyParams | KmacImportParams,
- extractable: boolean,
Uses a message recipient's asymmetric public key to encrypt a temporary symmetric key. This encrypted key is the "encapsulated key" represented as EncapsulatedKey.

The algorithms currently supported include:

- 'ML-KEM-512'
- 'ML-KEM-768'
- 'ML-KEM-1024'

@param usages See Key usages.

@returns Fulfills with EncapsulatedKey upon success.

Using the method and parameters specified by algorithm and the keying material provided by key, this method attempts to encipher data. If successful, the returned promise is resolved with an <ArrayBuffer> containing the encrypted result.

The algorithms currently supported include:

- 'AES-CBC'
- 'AES-CTR'
- 'AES-GCM'
- 'AES-OCB'
- 'ChaCha20-Poly1305'
- 'RSA-OAEP'
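The encrypt/decrypt pair can be sketched as an AES-GCM round trip (assumes an ESM module with top-level await):

```javascript
import { webcrypto } from 'node:crypto';

const key = await webcrypto.subtle.generateKey(
  { name: 'AES-GCM', length: 256 }, true, ['encrypt', 'decrypt']);

// A 96-bit IV is conventional for GCM; it must be unique per encryption.
const iv = webcrypto.getRandomValues(new Uint8Array(12));
const plaintext = new TextEncoder().encode('secret message');

const ciphertext = await webcrypto.subtle.encrypt({ name: 'AES-GCM', iv }, key, plaintext);
const decrypted = await webcrypto.subtle.decrypt({ name: 'AES-GCM', iv }, key, ciphertext);

console.log(new TextDecoder().decode(decrypted)); // 'secret message'
```

The ciphertext is longer than the plaintext because GCM appends an authentication tag.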
- format: 'jwk',
Exports the given key into the specified format, if supported.

If the <CryptoKey> is not extractable, the returned promise will reject.

When format is either 'pkcs8' or 'spki' and the export is successful, the returned promise will be resolved with an <ArrayBuffer> containing the exported key data.

When format is 'jwk' and the export is successful, the returned promise will be resolved with a JavaScript object conforming to the JSON Web Key specification.

@param format Must be one of 'raw', 'pkcs8', 'spki', 'jwk', 'raw-secret', 'raw-public', or 'raw-seed'.

@returns <Promise> containing <ArrayBuffer>.

- format: 'spki' | 'pkcs8' | 'raw' | 'raw-public' | 'raw-secret' | 'raw-seed',
- extractable: boolean,
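A sketch of the export path above, round-tripping an extractable AES key through the 'raw' format (assumes an ESM module with top-level await):

```javascript
import { webcrypto } from 'node:crypto';

const key = await webcrypto.subtle.generateKey(
  { name: 'AES-CBC', length: 128 }, true /* extractable */, ['encrypt', 'decrypt']);

// 'raw' export of a secret key yields the key bytes as an ArrayBuffer.
const raw = await webcrypto.subtle.exportKey('raw', key);
console.log(raw.byteLength); // 16 (128 bits)

// The bytes can be re-imported, optionally as non-extractable.
const reimported = await webcrypto.subtle.importKey(
  'raw', raw, { name: 'AES-CBC' }, false, ['encrypt']);
console.log(reimported.extractable); // false
```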
Using the parameters provided in algorithm, this method attempts to generate new keying material. Depending on the algorithm used either a single CryptoKey or a CryptoKeyPair is generated.

The CryptoKeyPair (public and private key) generating algorithms supported include:

- 'ECDH'
- 'ECDSA'
- 'Ed25519'
- 'Ed448'
- 'ML-DSA-44'
- 'ML-DSA-65'
- 'ML-DSA-87'
- 'ML-KEM-512'
- 'ML-KEM-768'
- 'ML-KEM-1024'
- 'RSA-OAEP'
- 'RSA-PSS'
- 'RSASSA-PKCS1-v1_5'
- 'X25519'
- 'X448'

The CryptoKey (secret key) generating algorithms supported include:

- 'AES-CBC'
- 'AES-CTR'
- 'AES-GCM'
- 'AES-KW'
- 'AES-OCB'
- 'ChaCha20-Poly1305'
- 'HMAC'
- 'KMAC128'
- 'KMAC256'

@param keyUsages See Key usages.
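The CryptoKeyPair-vs-CryptoKey distinction above can be sketched as follows (assumes an ESM module with top-level await):

```javascript
import { webcrypto } from 'node:crypto';

// Asymmetric algorithms resolve with a CryptoKeyPair...
const pair = await webcrypto.subtle.generateKey(
  { name: 'ECDSA', namedCurve: 'P-256' }, false, ['sign', 'verify']);
console.log(pair.publicKey.type, pair.privateKey.type); // 'public' 'private'

// ...while secret-key algorithms resolve with a single CryptoKey.
const secret = await webcrypto.subtle.generateKey(
  { name: 'AES-GCM', length: 256 }, false, ['encrypt', 'decrypt']);
console.log(secret.type); // 'secret'
```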
- extractable: boolean,

Derives the public key from a given private key.

@param key A private key from which to derive the corresponding public key.

@param keyUsages See Key usages.

@returns Fulfills with a CryptoKey upon success.

- format: 'jwk',
- algorithm: AlgorithmIdentifier | RsaHashedImportParams | EcKeyImportParams | HmacImportParams | AesKeyAlgorithm | KmacImportParams,
- extractable: boolean,
This method attempts to interpret the provided keyData as the given format to create a CryptoKey instance using the provided algorithm, extractable, and keyUsages arguments. If the import is successful, the returned promise will be resolved with a <CryptoKey> representation of the key material.

If importing KDF algorithm keys, extractable must be false.

@param format Must be one of 'raw', 'pkcs8', 'spki', 'jwk', 'raw-secret', 'raw-public', or 'raw-seed'.

@param keyUsages See Key usages.
- format: 'spki' | 'pkcs8' | 'raw' | 'raw-public' | 'raw-secret' | 'raw-seed',
- algorithm: AlgorithmIdentifier | RsaHashedImportParams | EcKeyImportParams | HmacImportParams | AesKeyAlgorithm | KmacImportParams,
- extractable: boolean,
- sign(

Using the method and parameters given by algorithm and the keying material provided by key, this method attempts to generate a cryptographic signature of data. If successful, the returned promise is resolved with an <ArrayBuffer> containing the generated signature.

The algorithms currently supported include:

- 'ECDSA'
- 'Ed25519'
- 'Ed448'
- 'HMAC'
- 'KMAC128'
- 'KMAC256'
- 'ML-DSA-44'
- 'ML-DSA-65'
- 'ML-DSA-87'
- 'RSA-PSS'
- 'RSASSA-PKCS1-v1_5'
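A minimal sketch of subtle.sign() with HMAC, where the signature length equals the hash output length (assumes an ESM module with top-level await):

```javascript
import { webcrypto } from 'node:crypto';

const key = await webcrypto.subtle.generateKey(
  { name: 'HMAC', hash: 'SHA-256' }, false, ['sign', 'verify']);

const data = new TextEncoder().encode('message to authenticate');
const signature = await webcrypto.subtle.sign('HMAC', key, data);

console.log(signature.byteLength); // 32 bytes for SHA-256
```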
- unwrappedKeyAlgorithm: AlgorithmIdentifier | RsaHashedImportParams | EcKeyImportParams | HmacImportParams | AesKeyAlgorithm | KmacImportParams,
- extractable: boolean,

In cryptography, "wrapping a key" refers to exporting and then encrypting the keying material. This method attempts to decrypt a wrapped key and create a <CryptoKey> instance. It is equivalent to calling subtle.decrypt() first on the encrypted key data (using the wrappedKey, unwrapAlgo, and unwrappingKey arguments as input) then passing the results in to the subtle.importKey() method using the unwrappedKeyAlgo, extractable, and keyUsages arguments as inputs. If successful, the returned promise is resolved with a <CryptoKey> object.

The wrapping algorithms currently supported include:

- 'AES-CBC'
- 'AES-CTR'
- 'AES-GCM'
- 'AES-KW'
- 'AES-OCB'
- 'ChaCha20-Poly1305'
- 'RSA-OAEP'

The unwrapped key algorithms supported include:

- 'AES-CBC'
- 'AES-CTR'
- 'AES-GCM'
- 'AES-KW'
- 'AES-OCB'
- 'ChaCha20-Poly1305'
- 'ECDH'
- 'ECDSA'
- 'Ed25519'
- 'Ed448'
- 'HMAC'
- 'KMAC128'
- 'KMAC256'
- 'ML-DSA-44'
- 'ML-DSA-65'
- 'ML-DSA-87'
- 'ML-KEM-512'
- 'ML-KEM-768'
- 'ML-KEM-1024'
- 'RSA-OAEP'
- 'RSA-PSS'
- 'RSASSA-PKCS1-v1_5'
- 'X25519'
- 'X448'

@param format Must be one of 'raw', 'pkcs8', 'spki', 'jwk', 'raw-secret', 'raw-public', or 'raw-seed'.

@param keyUsages See Key usages.
- ): Promise<boolean>;
Using the method and parameters given in algorithm and the keying material provided by key, this method attempts to verify that signature is a valid cryptographic signature of data. The returned promise is resolved with either true or false.

The algorithms currently supported include:

- 'ECDSA'
- 'Ed25519'
- 'Ed448'
- 'HMAC'
- 'KMAC128'
- 'KMAC256'
- 'ML-DSA-44'
- 'ML-DSA-65'
- 'ML-DSA-87'
- 'RSA-PSS'
- 'RSASSA-PKCS1-v1_5'
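A sign/verify round trip with ECDSA sketches the true/false resolution described above (assumes an ESM module with top-level await):

```javascript
import { webcrypto } from 'node:crypto';

const { privateKey, publicKey } = await webcrypto.subtle.generateKey(
  { name: 'ECDSA', namedCurve: 'P-256' }, false, ['sign', 'verify']);

const data = new TextEncoder().encode('payload');
const params = { name: 'ECDSA', hash: 'SHA-256' };

const signature = await webcrypto.subtle.sign(params, privateKey, data);
const valid = await webcrypto.subtle.verify(params, publicKey, signature, data);
const tampered = await webcrypto.subtle.verify(
  params, publicKey, signature, new TextEncoder().encode('other'));

console.log(valid, tampered); // true false
```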
In cryptography, "wrapping a key" refers to exporting and then encrypting the keying material. This method exports the keying material into the format identified by format, then encrypts it using the method and parameters specified by wrapAlgo and the keying material provided by wrappingKey. It is the equivalent of calling subtle.exportKey() using format and key as the arguments, then passing the result to the subtle.encrypt() method using wrappingKey and wrapAlgo as inputs. If successful, the returned promise will be resolved with an <ArrayBuffer> containing the encrypted key data.

The wrapping algorithms currently supported include:

- 'AES-CBC'
- 'AES-CTR'
- 'AES-GCM'
- 'AES-KW'
- 'AES-OCB'
- 'ChaCha20-Poly1305'
- 'RSA-OAEP'

@param format Must be one of 'raw', 'pkcs8', 'spki', 'jwk', 'raw-secret', 'raw-public', or 'raw-seed'.
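The wrapKey/unwrapKey pair described above can be sketched with AES-KW (assumes an ESM module with top-level await; note the key to be wrapped must itself be extractable):

```javascript
import { webcrypto } from 'node:crypto';

const wrappingKey = await webcrypto.subtle.generateKey(
  { name: 'AES-KW', length: 256 }, false, ['wrapKey', 'unwrapKey']);
const keyToWrap = await webcrypto.subtle.generateKey(
  { name: 'AES-GCM', length: 256 }, true /* must be extractable to wrap */, ['encrypt', 'decrypt']);

// Export as 'raw' and encrypt with AES-KW in one step.
const wrapped = await webcrypto.subtle.wrapKey('raw', keyToWrap, wrappingKey, 'AES-KW');

// Decrypt and re-import in one step.
const unwrapped = await webcrypto.subtle.unwrapKey(
  'raw', wrapped, wrappingKey, 'AES-KW',
  { name: 'AES-GCM' }, false, ['decrypt']);

console.log(unwrapped.type); // 'secret'
```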
- type AlgorithmIdentifier = Algorithm | string
- type BigInteger = Uint8Array
- type BufferSource = ArrayBufferView | ArrayBuffer
- type KeyFormat = 'jwk' | 'pkcs8' | 'raw' | 'raw-public' | 'raw-secret' | 'raw-seed' | 'spki'
- type KeyType = 'private' | 'public' | 'secret'
- type KeyUsage = 'encrypt' | 'decrypt' | 'sign' | 'verify' | 'deriveKey' | 'deriveBits' | 'encapsulateBits' | 'decapsulateBits' | 'encapsulateKey' | 'decapsulateKey' | 'wrapKey' | 'unwrapKey'
- type NamedCurve = string
interface Argon2Parameters
- associatedData?: string | ArrayBuffer | ArrayBufferView<ArrayBufferLike>

OPTIONAL. Additional data to be added to the hash, functionally equivalent to salt or secret, but meant for non-random data. If used, must have a length not greater than 2**32-1 bytes.

- memory: number
REQUIRED. Memory cost in 1 KiB blocks. Must be greater than 8 * parallelism and less than 2**32-1. The actual number of blocks is rounded down to the nearest multiple of 4 * parallelism.

- message: string | ArrayBuffer | ArrayBufferView<ArrayBufferLike>
REQUIRED. This is the password for password hashing applications of Argon2.
- nonce: string | ArrayBuffer | ArrayBufferView<ArrayBufferLike>
REQUIRED. Must be at least 8 bytes long. This is the salt for password hashing applications of Argon2.
- parallelism: number
REQUIRED. The degree of parallelism determines how many computational chains (lanes) can be run. Must be greater than 1 and less than 2**24-1.

- passes: number
REQUIRED. Number of passes (iterations). Must be greater than 1 and less than 2**32-1.

- secret?: string | ArrayBuffer | ArrayBufferView<ArrayBufferLike>
OPTIONAL. Random additional input, similar to the salt, that should NOT be stored with the derived key. This is known as pepper in password hashing applications. If used, must have a length not greater than 2**32-1 bytes.

- tagLength: number
REQUIRED. The length of the key to generate. Must be greater than 4 and less than 2**32-1.
interface AsymmetricKeyDetails
interface BasePrivateKeyEncodingOptions<T extends KeyFormat>
interface CheckPrimeOptions
- checks?: number
The number of Miller-Rabin probabilistic primality iterations to perform. When the value is 0 (zero), a number of checks is used that yields a false positive rate of at most 2**-64 for random input. Care must be used when selecting a number of checks. Refer to the OpenSSL documentation for the BN_is_prime_ex function's nchecks option for more details.
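The checks option above can be sketched with the synchronous primality test in node:crypto (candidates are given as bigint):

```javascript
import { checkPrimeSync } from 'node:crypto';

// With an explicit number of Miller-Rabin iterations.
console.log(checkPrimeSync(2n ** 61n - 1n, { checks: 10 })); // true (M61, a Mersenne prime)
console.log(checkPrimeSync(2n ** 61n + 1n, { checks: 10 })); // false (divisible by 3)

// With checks omitted, OpenSSL picks a count targeting a
// false-positive rate of at most 2**-64 for random input.
console.log(checkPrimeSync(7n)); // true
```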
interface CipherCCM
Instances of the Cipheriv class are used to encrypt data. The class can be used in one of two ways:

- As a stream that is both readable and writable, where plain unencrypted data is written to produce encrypted data on the readable side, or
- Using the cipher.update() and cipher.final() methods to produce the encrypted data.
The createCipheriv method is used to create Cipheriv instances. Cipheriv objects are not to be created directly using the new keyword.

Example: Using Cipheriv objects as streams:

const { scrypt, randomFill, createCipheriv, } = await import('node:crypto');

const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';

// First, we'll generate the key. The key length is dependent on the algorithm.
// In this case for aes192, it is 24 bytes (192 bits).
scrypt(password, 'salt', 24, (err, key) => {
  if (err) throw err;
  // Then, we'll generate a random initialization vector
  randomFill(new Uint8Array(16), (err, iv) => {
    if (err) throw err;
    // Once we have the key and iv, we can create and use the cipher...
    const cipher = createCipheriv(algorithm, key, iv);

    let encrypted = '';
    cipher.setEncoding('hex');

    cipher.on('data', (chunk) => encrypted += chunk);
    cipher.on('end', () => console.log(encrypted));

    cipher.write('some clear text data');
    cipher.end();
  });
});

Example: Using Cipheriv and piped streams:

import { createReadStream, createWriteStream, } from 'node:fs';
import { pipeline, } from 'node:stream';
const { scrypt, randomFill, createCipheriv, } = await import('node:crypto');

const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';

// First, we'll generate the key. The key length is dependent on the algorithm.
// In this case for aes192, it is 24 bytes (192 bits).
scrypt(password, 'salt', 24, (err, key) => {
  if (err) throw err;
  // Then, we'll generate a random initialization vector
  randomFill(new Uint8Array(16), (err, iv) => {
    if (err) throw err;

    const cipher = createCipheriv(algorithm, key, iv);

    const input = createReadStream('test.js');
    const output = createWriteStream('test.enc');
    pipeline(input, cipher, output, (err) => {
      if (err) throw err;
    });
  });
});

Example: Using the cipher.update() and cipher.final() methods:

const { scrypt, randomFill, createCipheriv, } = await import('node:crypto');

const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';

// First, we'll generate the key. The key length is dependent on the algorithm.
// In this case for aes192, it is 24 bytes (192 bits).
scrypt(password, 'salt', 24, (err, key) => {
  if (err) throw err;
  // Then, we'll generate a random initialization vector
  randomFill(new Uint8Array(16), (err, iv) => {
    if (err) throw err;

    const cipher = createCipheriv(algorithm, key, iv);

    let encrypted = cipher.update('some clear text data', 'utf8', 'hex');
    encrypted += cipher.final('hex');
    console.log(encrypted);
  });
});

- allowHalfOpen: boolean
If false then the stream will automatically end the writable side when the readable side ends. Set initially by the allowHalfOpen constructor option, which defaults to true.

This can be changed manually to change the half-open behavior of an existing Duplex stream instance, but must be changed before the 'end' event is emitted.

- readable: boolean
Is true if it is safe to call read, which means the stream has not been destroyed or emitted 'error' or 'end'.

- readonly readableAborted: boolean

Returns whether the stream was destroyed or errored before emitting 'end'.

- readonly readableEncoding: null | BufferEncoding

Getter for the property encoding of a given Readable stream. The encoding property can be set using the setEncoding method.

- readonly readableFlowing: null | boolean

This property reflects the current state of a Readable stream as described in the Three states section.

- readonly readableHighWaterMark: number

Returns the value of highWaterMark passed when creating this Readable.

- readonly readableLength: number

This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the highWaterMark.

- readonly writable: boolean
Is
trueif it is safe to callwritable.write(), which means the stream has not been destroyed, errored, or ended. - readonly writableAborted: boolean
Returns whether the stream was destroyed or errored before emitting 'finish'.
- readonly writableCorked: number
Number of times writable.uncork() needs to be called in order to fully uncork the stream.
- readonly writableEnded: boolean
Is true after writable.end() has been called. This property does not indicate whether the data has been flushed; for this, use writable.writableFinished instead.
- readonly writableHighWaterMark: number
Returns the value of highWaterMark passed when creating this Writable.
- readonly writableLength: number
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the highWaterMark.
- readonly writableNeedDrain: boolean
Is true if the stream's buffer has been full and the stream will emit 'drain'.
Calls readable.destroy() with an AbortError and returns a promise that fulfills when the stream is finished.
- @returns AsyncIterator to fully consume the stream.
- event: 'close', listener: () => void): this;
Event emitter. The defined events on documents include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'data', listener: (chunk: any) => void): this;
event: 'drain', listener: () => void): this;
event: 'end', listener: () => void): this;
event: 'error',): this;
event: 'finish', listener: () => void): this;
event: 'pause', listener: () => void): this;
event: 'pipe',): this;
event: 'readable', listener: () => void): this;
event: 'resume', listener: () => void): this;
event: 'unpipe',): this;
event: string | symbol, listener: (...args: any[]) => void): this;
This method returns a new stream with chunks of the underlying stream paired with a counter in the form [index, chunk]. The first index value is 0 and it increases by 1 for each chunk produced.
@returns a stream of indexed pairs.
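The same indexed pairing can be sketched with the stream helper methods, assuming the experimental map() and toArray() helpers of recent Node.js versions are available:

```javascript
import { Readable } from 'node:stream';

// Pair each chunk with a zero-based index, mimicking the [index, chunk]
// shape described above. map() processes chunks in order by default.
let index = 0;
const pairs = await Readable.from(['a', 'b', 'c'])
  .map((chunk) => [index++, chunk])
  .toArray();

console.log(pairs); // [ [ 0, 'a' ], [ 1, 'b' ], [ 2, 'c' ] ]
```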
- stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
The writable.cork() method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.
The primary intent of writable.cork() is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them all to writable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork() without implementing writable._writev() may have an adverse effect on throughput.
See also: writable.uncork(), writable._writev().
- ): this;
Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the readable stream will release any internal resources and subsequent calls to push() will be ignored.
Once destroy() has been called, any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.
Implementors should not override this method, but instead implement readable._destroy().
@param error Error which will be passed as payload in 'error' event
- drop(limit: number,
This method returns a new stream with the first limit chunks dropped from the start.
@param limit the number of chunks to drop from the readable.
@returns a stream with limit chunks dropped from the start.
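For example, assuming the experimental stream helper methods are available, dropping the first two chunks looks like this:

```javascript
import { Readable } from 'node:stream';

// The first two chunks are discarded; the rest pass through unchanged.
const remaining = await Readable.from([1, 2, 3, 4, 5]).drop(2).toArray();

console.log(remaining); // [ 3, 4, 5 ]
```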
- emit(event: 'close'): boolean;
Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.
Returns true if the event had listeners, false otherwise.

import { EventEmitter } from 'node:events';
const myEmitter = new EventEmitter();

// First listener
myEmitter.on('event', function firstListener() {
  console.log('Helloooo! first listener');
});
// Second listener
myEmitter.on('event', function secondListener(arg1, arg2) {
  console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
});
// Third listener
myEmitter.on('event', function thirdListener(...args) {
  const parameters = args.join(', ');
  console.log(`event with parameters ${parameters} in third listener`);
});

console.log(myEmitter.listeners('event'));

myEmitter.emit('event', 1, 2, 3, 4, 5);

// Prints:
// [
//   [Function: firstListener],
//   [Function: secondListener],
//   [Function: thirdListener]
// ]
// Helloooo! first listener
// event with parameters 1, 2 in second listener
// event with parameters 1, 2, 3, 4, 5 in third listener

- end(cb?: () => void): this;
Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.
Calling the write method after calling end will raise an error.

// Write 'hello, ' and then end with 'world!'.
import fs from 'node:fs';
const file = fs.createWriteStream('example.txt');
file.write('hello, ');
file.end('world!');
// Writing more now is not allowed!

end(chunk: any, cb?: () => void): this;
Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.
Calling the write method after calling end will raise an error.
@param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.
end(chunk: any, encoding: BufferEncoding, cb?: () => void): this;
Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.
Calling the write method after calling end will raise an error.
@param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.
@param encoding The encoding if chunk is a string
Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or Symbols.

import { EventEmitter } from 'node:events';
const myEE = new EventEmitter();
myEE.on('foo', () => {});
myEE.on('bar', () => {});
const sym = Symbol('symbol');
myEE.on(sym, () => {});
console.log(myEE.eventNames());
// Prints: [ 'foo', 'bar', Symbol(symbol) ]

- ): Promise<boolean>;
This method is similar to Array.prototype.every and calls fn on each chunk in the stream to check if all awaited return values are truthy for fn. Once an fn call on a chunk's awaited return value is falsy, the stream is destroyed and the promise is fulfilled with false. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true.
@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to true if fn returned a truthy value for every one of the chunks.
This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise, that promise will be awaited.
@param fn a function to filter chunks from the stream. Async or not.
@returns a stream filtered with the predicate fn.
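A minimal filtering sketch, assuming the experimental stream helper methods are available:

```javascript
import { Readable } from 'node:stream';

// Keep only even numbers; the predicate may be async, in which case the
// returned promise is awaited before the chunk is accepted or rejected.
const evens = await Readable.from([1, 2, 3, 4, 5, 6])
  .filter(async (n) => n % 2 === 0)
  .toArray();

console.log(evens); // [ 2, 4, 6 ]
```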
Once the cipher.final() method has been called, the Cipheriv object can no longer be used to encrypt data. Attempts to call cipher.final() more than once will result in an error being thrown.
@returns Any remaining enciphered contents. If outputEncoding is specified, a string is returned. If an outputEncoding is not provided, a Buffer is returned.
outputEncoding: BufferEncoding): string;
Once the cipher.final() method has been called, the Cipheriv object can no longer be used to encrypt data. Attempts to call cipher.final() more than once will result in an error being thrown.
@param outputEncoding The encoding of the return value.
@returns Any remaining enciphered contents. If outputEncoding is specified, a string is returned. If an outputEncoding is not provided, a Buffer is returned.
- ): Promise<undefined | T>;
This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.
@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.
find(): Promise<any>;
This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.
@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.
This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.
It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.
@param fn a function to map over every chunk in the stream. May be async. May be a stream or generator.
@returns a stream flat-mapped with the function fn.
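A flat-mapping sketch, assuming the experimental stream helper methods are available:

```javascript
import { Readable } from 'node:stream';

// Each line chunk maps to an iterable of words, which are then
// flattened into a single stream of words.
const words = await Readable.from(['hello world', 'foo bar'])
  .flatMap((line) => line.split(' '))
  .toArray();

console.log(words); // [ 'hello', 'world', 'foo', 'bar' ]
```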
- ): Promise<void>;
This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise, that promise will be awaited.
This method is different from for await...of loops in that it can optionally process chunks concurrently. In addition, a forEach iteration can only be stopped by having passed a signal option and aborting the related AbortController, while for await...of can be stopped with break or return. In either case the stream will be destroyed.
This method is different from listening to the 'data' event in that it uses the readable event in the underlying machinery and can limit the number of concurrent fn calls.
@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise for when the stream has finished.
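A minimal forEach sketch, assuming the experimental stream helper methods are available:

```javascript
import { Readable } from 'node:stream';

// Visit each chunk in order; the returned promise settles once the
// stream has been fully consumed (or destroyed on error/abort).
const seen = [];
await Readable.from(['a', 'b', 'c']).forEach((chunk) => {
  seen.push(chunk);
});

console.log(seen); // [ 'a', 'b', 'c' ]
```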
Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to EventEmitter.defaultMaxListeners.
The readable.isPaused() method returns the current operating state of the Readable. This is used primarily by the mechanism that underlies the readable.pipe() method. In most typical cases, there will be no reason to use this method directly.

const readable = new stream.Readable();
readable.isPaused(); // === false
readable.pause();
readable.isPaused(); // === true
readable.resume();
readable.isPaused(); // === false

- options?: { destroyOnReturn: boolean }): AsyncIterator<any>;
The iterator created by this method gives users the option to cancel the destruction of the stream if the for await...of loop is exited by return, break, or throw, or if the iterator should destroy the stream if the stream emitted an error during iteration.
- eventName: string | symbol, listener?: Function): number;
Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.
@param eventName The name of the event being listened for
@param listener The event handler function
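For example (the optional listener argument is supported only in newer Node.js versions, so that part is a hedged sketch):

```javascript
import { EventEmitter } from 'node:events';

const ee = new EventEmitter();
const handler = () => {};
ee.on('ping', handler);
ee.on('ping', handler); // the same function, added twice
ee.on('ping', () => {});

console.log(ee.listenerCount('ping')); // 3
// With the optional listener argument (newer Node.js versions),
// only matching instances of that function are counted:
console.log(ee.listenerCount('ping', handler));
```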
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named eventName.

server.on('connection', (stream) => {
  console.log('someone connected!');
});
console.log(util.inspect(server.listeners('connection')));
// Prints: [ [Function] ]

- map(
This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise, that promise will be awaited before being passed to the result stream.
@param fn a function to map over every chunk in the stream. Async or not.
@returns a stream mapped with the function fn.
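A mapping sketch, assuming the experimental stream helper methods are available:

```javascript
import { Readable } from 'node:stream';

// Double each chunk; an async mapper's promise is awaited before the
// result is passed downstream.
const doubled = await Readable.from([1, 2, 3])
  .map(async (n) => n * 2)
  .toArray();

console.log(doubled); // [ 2, 4, 6 ]
```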
- eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for emitter.removeListener().
- on(event: 'close', listener: () => void): this;
Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

server.on('connection', (stream) => {
  console.log('someone connected!');
});

Returns a reference to the EventEmitter, so that calls can be chained.
By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

import { EventEmitter } from 'node:events';
const myEE = new EventEmitter();
myEE.on('foo', () => console.log('a'));
myEE.prependListener('foo', () => console.log('b'));
myEE.emit('foo');
// Prints:
//   b
//   a

@param listener The callback function
- once(event: 'close', listener: () => void): this;
Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

server.once('connection', (stream) => {
  console.log('Ah, we have our first user!');
});

Returns a reference to the EventEmitter, so that calls can be chained.
By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

import { EventEmitter } from 'node:events';
const myEE = new EventEmitter();
myEE.once('foo', () => console.log('a'));
myEE.prependOnceListener('foo', () => console.log('b'));
myEE.emit('foo');
// Prints:
//   b
//   a

@param listener The callback function
The readable.pause() method will cause a stream in flowing mode to stop emitting 'data' events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.

const readable = getReadableStreamSomehow();
readable.on('data', (chunk) => {
  console.log(`Received ${chunk.length} bytes of data.`);
  readable.pause();
  console.log('There will be no additional data for 1 second.');
  setTimeout(() => {
    console.log('Now data will start flowing again.');
    readable.resume();
  }, 1000);
});

The readable.pause() method has no effect if there is a 'readable' event listener.
- event: 'close', listener: () => void): this;
Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

server.prependListener('connection', (stream) => {
  console.log('someone connected!');
});

Returns a reference to the EventEmitter, so that calls can be chained.
@param listener The callback function
- event: 'close', listener: () => void): this;
Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

server.prependOnceListener('connection', (stream) => {
  console.log('Ah, we have our first user!');
});

Returns a reference to the EventEmitter, so that calls can be chained.
@param listener The callback function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named eventName, including any wrappers (such as those created by .once()).

import { EventEmitter } from 'node:events';
const emitter = new EventEmitter();
emitter.once('log', () => console.log('log once'));

// Returns a new Array with a function `onceWrapper` which has a property
// `listener` which contains the original listener bound above
const listeners = emitter.rawListeners('log');
const logFnWrapper = listeners[0];

// Logs "log once" to the console and does not unbind the `once` event
logFnWrapper.listener();

// Logs "log once" to the console and removes the listener
logFnWrapper();

emitter.on('log', () => console.log('log persistently'));
// Will return a new Array with a single function bound by `.on()` above
const newListeners = emitter.rawListeners('log');

// Logs "log persistently" twice
newListeners[0]();
emitter.emit('log');

- read(size?: number): any;
The readable.read() method reads data out of the internal buffer and returns it. If no data is available to be read, null is returned. By default, the data is returned as a Buffer object unless an encoding has been specified using the readable.setEncoding() method or the stream is operating in object mode.
The optional size argument specifies a specific number of bytes to read. If size bytes are not available to be read, null will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.
If the size argument is not specified, all of the data contained in the internal buffer will be returned.
The size argument must be less than or equal to 1 GiB.
The readable.read() method should only be called on Readable streams operating in paused mode. In flowing mode, readable.read() is called automatically until the internal buffer is fully drained.

const readable = getReadableStreamSomehow();

// 'readable' may be triggered multiple times as data is buffered in
readable.on('readable', () => {
  let chunk;
  console.log('Stream is readable (new data received in buffer)');
  // Use a loop to make sure we read all currently available data
  while (null !== (chunk = readable.read())) {
    console.log(`Read ${chunk.length} bytes of data...`);
  }
});

// 'end' will be triggered once when there is no more data available
readable.on('end', () => {
  console.log('Reached end of stream.');
});

Each call to readable.read() returns a chunk of data, or null. The chunks are not concatenated. A while loop is necessary to consume all data currently in the buffer. When reading a large file, .read() may return null, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new 'readable' event will be emitted when there is more data in the buffer. Finally the 'end' event will be emitted when there is no more data to come.
Therefore to read a file's whole contents from a readable, it is necessary to collect chunks across multiple 'readable' events:

const chunks = [];
readable.on('readable', () => {
  let chunk;
  while (null !== (chunk = readable.read())) {
    chunks.push(chunk);
  }
});
readable.on('end', () => {
  const content = chunks.join('');
});

A Readable stream in object mode will always return a single item from a call to readable.read(size), regardless of the value of the size argument.
If the readable.read() method returns a chunk of data, a 'data' event will also be emitted.
Calling read after the 'end' event has been emitted will return null. No runtime error will be raised.
@param size Optional argument to specify how much data to read.
- initial?: undefined,): Promise<T>;
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied, the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.
The reducer function iterates the stream element-by-element which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the readable.map method.
@param fn a reducer function to call over every chunk in the stream. Async or not.
@param initial the initial value to use in the reduction.
@returns a promise for the final value of the reduction.
initial: T,): Promise<T>;
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied, the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.
The reducer function iterates the stream element-by-element which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the readable.map method.
@param fn a reducer function to call over every chunk in the stream. Async or not.
@param initial the initial value to use in the reduction.
@returns a promise for the final value of the reduction.
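A reduction sketch, assuming the experimental stream helper methods are available:

```javascript
import { Readable } from 'node:stream';

// Sum the chunks, starting from the initial accumulator value 0.
const sum = await Readable.from([1, 2, 3, 4]).reduce((acc, n) => acc + n, 0);

console.log(sum); // 10
```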
- eventName?: string | symbol): this;
Removes all listeners, or those of the specified eventName.
It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).
Returns a reference to the EventEmitter, so that calls can be chained.
- event: 'close', listener: () => void): this;
Removes the specified listener from the listener array for the event named eventName.

const callback = (stream) => {
  console.log('someone connected!');
};
server.on('connection', callback);
// ...
server.removeListener('connection', callback);

removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance.
Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from emit() in progress. Subsequent events behave as expected.

import { EventEmitter } from 'node:events';
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();

const callbackA = () => {
  console.log('A');
  myEmitter.removeListener('event', callbackB);
};

const callbackB = () => {
  console.log('B');
};

myEmitter.on('event', callbackA);
myEmitter.on('event', callbackB);

// callbackA removes listener callbackB but it will still be called.
// Internal listener array at time of emit [callbackA, callbackB]
myEmitter.emit('event');
// Prints:
//   A
//   B

// callbackB is now removed.
// Internal listener array [callbackA]
myEmitter.emit('event');
// Prints:
//   A

Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.
When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed:

import { EventEmitter } from 'node:events';
const ee = new EventEmitter();

function pong() {
  console.log('pong');
}

ee.on('ping', pong);
ee.once('ping', pong);
ee.removeListener('ping', pong);

ee.emit('ping');
ee.emit('ping');

Returns a reference to the EventEmitter, so that calls can be chained.
readable.resume()method causes an explicitly pausedReadablestream to resume emitting'data'events, switching the stream into flowing mode.The
readable.resume()method can be used to fully consume the data from a stream without actually processing any of that data:getReadableStreamSomehow() .resume() .on('end', () => { console.log('Reached the end, but did not read anything.'); });The
readable.resume()method has no effect if there is a'readable'event listener.- autoPadding?: boolean): this;
When using block encryption algorithms, the Cipheriv class will automatically add padding to the input data to the appropriate block size. To disable the default padding call cipher.setAutoPadding(false).
When autoPadding is false, the length of the entire input data must be a multiple of the cipher's block size or cipher.final() will throw an error. Disabling automatic padding is useful for non-standard padding, for instance using 0x0 instead of PKCS padding.
The cipher.setAutoPadding() method must be called before cipher.final().
@returns for method chaining.
- encoding: BufferEncoding): this;
The writable.setDefaultEncoding() method sets the default encoding for a Writable stream.
@param encoding The new default encoding
- encoding: BufferEncoding): this;
The readable.setEncoding() method sets the character encoding for data read from the Readable stream.
By default, no encoding is assigned and stream data will be returned as Buffer objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than as Buffer objects. For instance, calling readable.setEncoding('utf8') will cause the output data to be interpreted as UTF-8 data, and passed as strings. Calling readable.setEncoding('hex') will cause the data to be encoded in hexadecimal string format.
The Readable stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream as Buffer objects.

const readable = getReadableStreamSomehow();
readable.setEncoding('utf8');
readable.on('data', (chunk) => {
  assert.equal(typeof chunk, 'string');
  console.log('Got %d characters of string data:', chunk.length);
});

@param encoding The encoding to use.
- n: number): this;
By default EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps finding memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.
Returns a reference to the EventEmitter, so that calls can be chained.
- some(): Promise<boolean>;
This method is similar to Array.prototype.some and calls fn on each chunk in the stream until the awaited return value is true (or any truthy value). Once an fn call on a chunk's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false.
@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to true if fn returned a truthy value for at least one of the chunks.
- @param limit
the number of chunks to take from the readable.
@returns a stream with limit chunks taken.
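For example, assuming the experimental stream helper methods are available:

```javascript
import { Readable } from 'node:stream';

// Only the first three chunks are emitted; the source is then destroyed.
const firstThree = await Readable.from([1, 2, 3, 4, 5]).take(3).toArray();

console.log(firstThree); // [ 1, 2, 3 ]
```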
- ): Promise<any[]>;
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.
@returns a promise containing an array with the contents of the stream.
The writable.uncork() method flushes all data buffered since cork was called.
When using writable.cork() and writable.uncork() to manage the buffering of writes to a stream, defer calls to writable.uncork() using process.nextTick(). Doing so allows batching of all writable.write() calls that occur within a given Node.js event loop phase.

stream.cork();
stream.write('some ');
stream.write('data ');
process.nextTick(() => stream.uncork());

If the writable.cork() method is called multiple times on a stream, the same number of calls to writable.uncork() must be called to flush the buffered data.

stream.cork();
stream.write('some ');
stream.cork();
stream.write('data ');
process.nextTick(() => {
  stream.uncork();
  // The data will not be flushed until uncork() is called a second time.
  stream.uncork();
});

See also: writable.cork().
- destination?: WritableStream): this;
The readable.unpipe() method detaches a Writable stream previously attached using the pipe method.
If the destination is not specified, then all pipes are detached.
If the destination is specified, but no pipe is set up for it, then the method does nothing.

import fs from 'node:fs';
const readable = getReadableStreamSomehow();
const writable = fs.createWriteStream('file.txt');
// All the data from readable goes into 'file.txt',
// but only for the first second.
readable.pipe(writable);
setTimeout(() => {
  console.log('Stop writing to file.txt.');
  readable.unpipe(writable);
  console.log('Manually close the file stream.');
  writable.end();
}, 1000);

@param destination Optional specific stream to unpipe
- chunk: any, encoding?: BufferEncoding): void;
Passing chunk as null signals the end of the stream (EOF) and behaves the same as readable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.
The readable.unshift() method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.
The stream.unshift(chunk) method cannot be called after the 'end' event has been emitted or a runtime error will be thrown.
Developers using stream.unshift() often should consider switching to use of a Transform stream instead. See the API for stream implementers section for more information.

// Pull off a header delimited by \n\n.
// Use unshift() if we get too much.
// Call the callback with (error, header, stream).
import { StringDecoder } from 'node:string_decoder';
function parseHeader(stream, callback) {
  stream.on('error', callback);
  stream.on('readable', onReadable);
  const decoder = new StringDecoder('utf8');
  let header = '';
  function onReadable() {
    let chunk;
    while (null !== (chunk = stream.read())) {
      const str = decoder.write(chunk);
      if (str.includes('\n\n')) {
        // Found the header boundary.
        const split = str.split(/\n\n/);
        header += split.shift();
        const remaining = split.join('\n\n');
        const buf = Buffer.from(remaining, 'utf8');
        stream.removeListener('error', callback);
        // Remove the 'readable' listener before unshifting.
        stream.removeListener('readable', onReadable);
        if (buf.length)
          stream.unshift(buf);
        // Now the body of the message can be read from the stream.
        callback(null, header, stream);
        return;
      }
      // Still reading the header.
      header += str;
    }
  }
}

Unlike push, stream.unshift(chunk) will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results if readable.unshift() is called during a read (i.e. from within a _read implementation on a custom stream). Following the call to readable.unshift() with an immediate push will reset the reading state appropriately, however it is best to simply avoid calling readable.unshift() while in the process of performing a read.
@param chunk Chunk of data to unshift onto the read queue. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray}, {DataView} or null. For object mode streams, chunk may be any JavaScript value.
@param encoding Encoding of string chunks. Must be a valid Buffer encoding, such as 'utf8' or 'ascii'.
- ): NonSharedBuffer;
Updates the cipher with `data`. If the `inputEncoding` argument is given, the `data` argument is a string using the specified encoding. If the `inputEncoding` argument is not given, `data` must be a `Buffer`, `TypedArray`, or `DataView`. If `data` is a `Buffer`, `TypedArray`, or `DataView`, then `inputEncoding` is ignored.

The `outputEncoding` specifies the output format of the enciphered data. If the `outputEncoding` is specified, a string using the specified encoding is returned. If no `outputEncoding` is provided, a `Buffer` is returned.

The `cipher.update()` method can be called multiple times with new data until `cipher.final()` is called. Calling `cipher.update()` after `cipher.final()` will result in an error being thrown.

data: string,): NonSharedBuffer;
@param inputEncoding The `encoding` of the data.

data: ArrayBufferView, inputEncoding: undefined,): string;
@param inputEncoding The `encoding` of the data.
@param outputEncoding The `encoding` of the return value.

data: string,): string;
@param inputEncoding The `encoding` of the data.
@param outputEncoding The `encoding` of the return value.

- wrap(stream: ReadableStream): this;
Prior to Node.js 0.10, streams did not implement the entire `node:stream` module API as it is currently defined. (See Compatibility for more information.)

When using an older Node.js library that emits `'data'` events and has a pause method that is advisory only, the `readable.wrap()` method can be used to create a `Readable` stream that uses the old stream as its data source.

It will rarely be necessary to use `readable.wrap()` but the method has been provided as a convenience for interacting with older Node.js applications and libraries.

```js
import { OldReader } from './old-api-module.js';
import { Readable } from 'node:stream';
const oreader = new OldReader();
const myReader = new Readable().wrap(oreader);

myReader.on('readable', () => {
  myReader.read(); // etc.
});
```

@param stream An "old style" readable stream
- chunk: any,): boolean;
The `writable.write()` method writes some data to the stream, and calls the supplied `callback` once the data has been fully handled. If an error occurs, the `callback` will be called with the error as its first argument. The `callback` is called asynchronously and before `'error'` is emitted.

The return value is `true` if the internal buffer is less than the `highWaterMark` configured when the stream was created after admitting `chunk`. If `false` is returned, further attempts to write data to the stream should stop until the `'drain'` event is emitted.

While a stream is not draining, calls to `write()` will buffer `chunk`, and return `false`. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the `'drain'` event will be emitted. Once `write()` returns `false`, do not write more chunks until the `'drain'` event is emitted. While calling `write()` on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing to a socket that is not draining may lead to a remotely exploitable vulnerability.

Writing data while the stream is not draining is particularly problematic for a `Transform`, because `Transform` streams are paused by default until they are piped or a `'data'` or `'readable'` event handler is added.

If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a `Readable` and use pipe. However, if calling `write()` is preferred, it is possible to respect backpressure and avoid memory issues using the `'drain'` event:

```js
function write(data, cb) {
  if (!stream.write(data)) {
    stream.once('drain', cb);
  } else {
    process.nextTick(cb);
  }
}

// Wait for cb to be called before doing any other write.
write('hello', () => {
  console.log('Write completed, do more writes now.');
});
```

A `Writable` stream in object mode will always ignore the `encoding` argument.

@param chunk Optional data to write. For streams not operating in object mode, `chunk` must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, `chunk` may be any JavaScript value other than `null`.
@param callback Callback for when this chunk of data is flushed.
@returns `false` if the stream wishes for the calling code to wait for the `'drain'` event to be emitted before continuing to write additional data; otherwise `true`.

chunk: any, encoding: BufferEncoding,): boolean;
@param chunk Optional data to write. For streams not operating in object mode, `chunk` must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, `chunk` may be any JavaScript value other than `null`.
@param encoding The encoding, if `chunk` is a string.
@param callback Callback for when this chunk of data is flushed.
@returns `false` if the stream wishes for the calling code to wait for the `'drain'` event to be emitted before continuing to write additional data; otherwise `true`.
interface CipherCCMOptions
- signal?: AbortSignal
When provided, the corresponding `AbortController` can be used to cancel an asynchronous action.

- transform?: (this: Transform, chunk: any, encoding: BufferEncoding, callback: TransformCallback) => void
interface CipherChaCha20Poly1305
Instances of the `Cipheriv` class are used to encrypt data. The class can be used in one of two ways:

- As a stream that is both readable and writable, where plain unencrypted data is written to produce encrypted data on the readable side, or
- Using the `cipher.update()` and `cipher.final()` methods to produce the encrypted data.

The `createCipheriv` method is used to create `Cipheriv` instances. `Cipheriv` objects are not to be created directly using the `new` keyword.

Example: Using `Cipheriv` objects as streams:

```js
const {
  scrypt,
  randomFill,
  createCipheriv,
} = await import('node:crypto');

const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';

// First, we'll generate the key. The key length is dependent on the algorithm.
// In this case for aes192, it is 24 bytes (192 bits).
scrypt(password, 'salt', 24, (err, key) => {
  if (err) throw err;
  // Then, we'll generate a random initialization vector
  randomFill(new Uint8Array(16), (err, iv) => {
    if (err) throw err;
    // Once we have the key and iv, we can create and use the cipher...
    const cipher = createCipheriv(algorithm, key, iv);

    let encrypted = '';
    cipher.setEncoding('hex');
    cipher.on('data', (chunk) => encrypted += chunk);
    cipher.on('end', () => console.log(encrypted));

    cipher.write('some clear text data');
    cipher.end();
  });
});
```

Example: Using `Cipheriv` and piped streams:

```js
import { createReadStream, createWriteStream } from 'node:fs';
import { pipeline } from 'node:stream';
const {
  scrypt,
  randomFill,
  createCipheriv,
} = await import('node:crypto');

const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';

// First, we'll generate the key. The key length is dependent on the algorithm.
// In this case for aes192, it is 24 bytes (192 bits).
scrypt(password, 'salt', 24, (err, key) => {
  if (err) throw err;
  // Then, we'll generate a random initialization vector
  randomFill(new Uint8Array(16), (err, iv) => {
    if (err) throw err;
    const cipher = createCipheriv(algorithm, key, iv);

    const input = createReadStream('test.js');
    const output = createWriteStream('test.enc');

    pipeline(input, cipher, output, (err) => {
      if (err) throw err;
    });
  });
});
```

Example: Using the `cipher.update()` and `cipher.final()` methods:

```js
const {
  scrypt,
  randomFill,
  createCipheriv,
} = await import('node:crypto');

const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';

// First, we'll generate the key. The key length is dependent on the algorithm.
// In this case for aes192, it is 24 bytes (192 bits).
scrypt(password, 'salt', 24, (err, key) => {
  if (err) throw err;
  // Then, we'll generate a random initialization vector
  randomFill(new Uint8Array(16), (err, iv) => {
    if (err) throw err;
    const cipher = createCipheriv(algorithm, key, iv);

    let encrypted = cipher.update('some clear text data', 'utf8', 'hex');
    encrypted += cipher.final('hex');
    console.log(encrypted);
  });
});
```

- allowHalfOpen: boolean
If `false` then the stream will automatically end the writable side when the readable side ends. Set initially by the `allowHalfOpen` constructor option, which defaults to `true`.

This can be changed manually to change the half-open behavior of an existing `Duplex` stream instance, but must be changed before the `'end'` event is emitted.

- readable: boolean
Is `true` if it is safe to call read, which means the stream has not been destroyed or emitted `'error'` or `'end'`.

- readonly readableAborted: boolean

Returns whether the stream was destroyed or errored before emitting `'end'`.

- readonly readableEncoding: null | BufferEncoding

Getter for the property `encoding` of a given `Readable` stream. The `encoding` property can be set using the setEncoding method.

- readonly readableFlowing: null | boolean

This property reflects the current state of a `Readable` stream as described in the Three states section.

- readonly readableHighWaterMark: number

Returns the value of `highWaterMark` passed when creating this `Readable`.

- readonly readableLength: number

This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the `highWaterMark`.

- readonly writable: boolean

Is `true` if it is safe to call `writable.write()`, which means the stream has not been destroyed, errored, or ended.

- readonly writableAborted: boolean

Returns whether the stream was destroyed or errored before emitting `'finish'`.

- readonly writableCorked: number

Number of times `writable.uncork()` needs to be called in order to fully uncork the stream.

- readonly writableEnded: boolean

Is `true` after `writable.end()` has been called. This property does not indicate whether the data has been flushed; for this use `writable.writableFinished` instead.

- readonly writableHighWaterMark: number

Return the value of `highWaterMark` passed when creating this `Writable`.

- readonly writableLength: number

This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the `highWaterMark`.

- readonly writableNeedDrain: boolean

Is `true` if the stream's buffer has been full and stream will emit `'drain'`.

Calls `readable.destroy()` with an `AbortError` and returns a promise that fulfills when the stream is finished.

- @returns `AsyncIterator` to fully consume the stream.

- event: 'close', listener: () => void): this;
Event emitter. The defined events on documents include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'data', listener: (chunk: any) => void): this;
event: 'drain', listener: () => void): this;
event: 'end', listener: () => void): this;
event: 'error',): this;
event: 'finish', listener: () => void): this;
event: 'pause', listener: () => void): this;
event: 'pipe',): this;
event: 'readable', listener: () => void): this;
event: 'resume', listener: () => void): this;
event: 'unpipe',): this;
event: string | symbol, listener: (...args: any[]) => void): this;
This method returns a new stream with chunks of the underlying stream paired with a counter in the form `[index, chunk]`. The first index value is `0` and it increases by 1 for each chunk produced.

@returns a stream of indexed pairs.
- stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
The `writable.cork()` method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.

The primary intent of `writable.cork()` is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, `writable.cork()` buffers all the chunks until `writable.uncork()` is called, which will pass them all to `writable._writev()`, if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of `writable.cork()` without implementing `writable._writev()` may have an adverse effect on throughput.

See also: `writable.uncork()`, `writable._writev()`.

- ): this;
Destroy the stream. Optionally emit an `'error'` event, and emit a `'close'` event (unless `emitClose` is set to `false`). After this call, the readable stream will release any internal resources and subsequent calls to `push()` will be ignored.

Once `destroy()` has been called any further calls will be a no-op and no further errors except from `_destroy()` may be emitted as `'error'`.

Implementors should not override this method, but instead implement `readable._destroy()`.

@param error Error which will be passed as payload in `'error'` event

- drop(limit: number,
This method returns a new stream with the first `limit` chunks dropped from the start.

@param limit the number of chunks to drop from the readable.
@returns a stream with `limit` chunks dropped from the start.
- emit(event: 'close'): boolean;
Synchronously calls each of the listeners registered for the event named `eventName`, in the order they were registered, passing the supplied arguments to each.

Returns `true` if the event had listeners, `false` otherwise.

```js
import { EventEmitter } from 'node:events';
const myEmitter = new EventEmitter();

// First listener
myEmitter.on('event', function firstListener() {
  console.log('Helloooo! first listener');
});
// Second listener
myEmitter.on('event', function secondListener(arg1, arg2) {
  console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
});
// Third listener
myEmitter.on('event', function thirdListener(...args) {
  const parameters = args.join(', ');
  console.log(`event with parameters ${parameters} in third listener`);
});

console.log(myEmitter.listeners('event'));

myEmitter.emit('event', 1, 2, 3, 4, 5);

// Prints:
// [
//   [Function: firstListener],
//   [Function: secondListener],
//   [Function: thirdListener]
// ]
// Helloooo! first listener
// event with parameters 1, 2 in second listener
// event with parameters 1, 2, 3, 4, 5 in third listener
```

- end(cb?: () => void): this;
Calling the `writable.end()` method signals that no more data will be written to the `Writable`. The optional `chunk` and `encoding` arguments allow one final additional chunk of data to be written immediately before closing the stream.

Calling the write method after calling end will raise an error.

```js
// Write 'hello, ' and then end with 'world!'.
import fs from 'node:fs';
const file = fs.createWriteStream('example.txt');
file.write('hello, ');
file.end('world!');
// Writing more now is not allowed!
```

end(chunk: any, cb?: () => void): this;
@param chunk Optional data to write. For streams not operating in object mode, `chunk` must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, `chunk` may be any JavaScript value other than `null`.

end(chunk: any, encoding: BufferEncoding, cb?: () => void): this;
@param chunk Optional data to write. For streams not operating in object mode, `chunk` must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, `chunk` may be any JavaScript value other than `null`.
@param encoding The encoding if `chunk` is a string

Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or `Symbol`s.

```js
import { EventEmitter } from 'node:events';

const myEE = new EventEmitter();
myEE.on('foo', () => {});
myEE.on('bar', () => {});

const sym = Symbol('symbol');
myEE.on(sym, () => {});

console.log(myEE.eventNames());
// Prints: [ 'foo', 'bar', Symbol(symbol) ]
```

- ): Promise<boolean>;
This method is similar to `Array.prototype.every` and calls fn on each chunk in the stream to check whether every awaited return value is truthy. As soon as an fn call on a chunk resolves to a falsy value, the stream is destroyed and the promise is fulfilled with `false`. If all of the fn calls on the chunks resolve to a truthy value, the promise is fulfilled with `true`.

@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to `true` if fn returned a truthy value for every one of the chunks.

This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise, that promise will be `await`ed.

@param fn a function to filter chunks from the stream. Async or not.
@returns a stream filtered with the predicate fn.
Once the `cipher.final()` method has been called, the `Cipheriv` object can no longer be used to encrypt data. Attempts to call `cipher.final()` more than once will result in an error being thrown.

@returns Any remaining enciphered contents. If `outputEncoding` is specified, a string is returned. If an `outputEncoding` is not provided, a `Buffer` is returned.

outputEncoding: BufferEncoding): string;
@param outputEncoding The `encoding` of the return value.
@returns Any remaining enciphered contents. If `outputEncoding` is specified, a string is returned. If an `outputEncoding` is not provided, a `Buffer` is returned.

- ): Promise<undefined | T>;
This method is similar to `Array.prototype.find` and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with `undefined`.

@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to the first chunk for which fn evaluated with a truthy value, or `undefined` if no element was found.

find(): Promise<any>;
@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to the first chunk for which fn evaluated with a truthy value, or `undefined` if no element was found.

This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.
It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.
@param fn a function to map over every chunk in the stream. May be async. May be a stream or generator.
@returns a stream flat-mapped with the function fn.
- ): Promise<void>;
This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise, that promise will be `await`ed.

This method is different from `for await...of` loops in that it can optionally process chunks concurrently. In addition, a `forEach` iteration can only be stopped by having passed a `signal` option and aborting the related AbortController, while `for await...of` can be stopped with `break` or `return`. In either case the stream will be destroyed.

This method is different from listening to the `'data'` event in that it uses the `readable` event in the underlying machinery and can limit the number of concurrent fn calls.

@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise for when the stream has finished.
Returns the current max listener value for the `EventEmitter` which is either set by `emitter.setMaxListeners(n)` or defaults to `EventEmitter.defaultMaxListeners`.

The `readable.isPaused()` method returns the current operating state of the `Readable`. This is used primarily by the mechanism that underlies the `readable.pipe()` method. In most typical cases, there will be no reason to use this method directly.

```js
const readable = new stream.Readable();

readable.isPaused(); // === false
readable.pause();
readable.isPaused(); // === true
readable.resume();
readable.isPaused(); // === false
```

- options?: { destroyOnReturn: boolean }): AsyncIterator<any>;
The iterator created by this method gives users the option to cancel the destruction of the stream if the `for await...of` loop is exited by `return`, `break`, or `throw`, or if the iterator should destroy the stream if the stream emitted an error during iteration.

- eventName: string | symbol, listener?: Function): number;
Returns the number of listeners listening for the event named `eventName`. If `listener` is provided, it will return how many times the listener is found in the list of the listeners of the event.

@param eventName The name of the event being listened for
@param listener The event handler function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named `eventName`.

```js
server.on('connection', (stream) => {
  console.log('someone connected!');
});
console.log(util.inspect(server.listeners('connection')));
// Prints: [ [Function] ]
```

- map(
This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise, that promise will be `await`ed before being passed to the result stream.

@param fn a function to map over every chunk in the stream. Async or not.
@returns a stream mapped with the function fn.
- eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for `emitter.removeListener()`.

- on(event: 'close', listener: () => void): this;
Adds the `listener` function to the end of the listeners array for the event named `eventName`. No checks are made to see if the `listener` has already been added. Multiple calls passing the same combination of `eventName` and `listener` will result in the `listener` being added, and called, multiple times.

```js
server.on('connection', (stream) => {
  console.log('someone connected!');
});
```

Returns a reference to the `EventEmitter`, so that calls can be chained.

By default, event listeners are invoked in the order they are added. The `emitter.prependListener()` method can be used as an alternative to add the event listener to the beginning of the listeners array.

```js
import { EventEmitter } from 'node:events';
const myEE = new EventEmitter();
myEE.on('foo', () => console.log('a'));
myEE.prependListener('foo', () => console.log('b'));
myEE.emit('foo');
// Prints:
//   b
//   a
```

@param listener The callback function
- once(event: 'close',listener: () => void): this;
Adds a one-time `listener` function for the event named `eventName`. The next time `eventName` is triggered, this listener is removed and then invoked.

```js
server.once('connection', (stream) => {
  console.log('Ah, we have our first user!');
});
```

Returns a reference to the `EventEmitter`, so that calls can be chained.

By default, event listeners are invoked in the order they are added. The `emitter.prependOnceListener()` method can be used as an alternative to add the event listener to the beginning of the listeners array.

```js
import { EventEmitter } from 'node:events';
const myEE = new EventEmitter();
myEE.once('foo', () => console.log('a'));
myEE.prependOnceListener('foo', () => console.log('b'));
myEE.emit('foo');
// Prints:
//   b
//   a
```

@param listener The callback function
The `readable.pause()` method will cause a stream in flowing mode to stop emitting `'data'` events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.

```js
const readable = getReadableStreamSomehow();
readable.on('data', (chunk) => {
  console.log(`Received ${chunk.length} bytes of data.`);
  readable.pause();
  console.log('There will be no additional data for 1 second.');
  setTimeout(() => {
    console.log('Now data will start flowing again.');
    readable.resume();
  }, 1000);
});
```

The `readable.pause()` method has no effect if there is a `'readable'` event listener.

- event: 'close', listener: () => void): this;
Adds the `listener` function to the beginning of the listeners array for the event named `eventName`. No checks are made to see if the `listener` has already been added. Multiple calls passing the same combination of `eventName` and `listener` will result in the `listener` being added, and called, multiple times.

```js
server.prependListener('connection', (stream) => {
  console.log('someone connected!');
});
```

Returns a reference to the `EventEmitter`, so that calls can be chained.

@param listener The callback function
- event: 'close',listener: () => void): this;
Adds a one-time
listenerfunction for the event namedeventNameto the beginning of the listeners array. The next timeeventNameis triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName, including any wrappers (such as those created by .once()). import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log'); - read(size?: number): any;
The
readable.read() method reads data out of the internal buffer and returns it. If no data is available to be read, null is returned. By default, the data is returned as a Buffer object unless an encoding has been specified using the readable.setEncoding() method or the stream is operating in object mode. The optional
size argument specifies a specific number of bytes to read. If size bytes are not available to be read, null will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned. If the
size argument is not specified, all of the data contained in the internal buffer will be returned. The
size argument must be less than or equal to 1 GiB. The
readable.read() method should only be called on Readable streams operating in paused mode. In flowing mode, readable.read() is called automatically until the internal buffer is fully drained. const readable = getReadableStreamSomehow(); // 'readable' may be triggered multiple times as data is buffered in readable.on('readable', () => { let chunk; console.log('Stream is readable (new data received in buffer)'); // Use a loop to make sure we read all currently available data while (null !== (chunk = readable.read())) { console.log(`Read ${chunk.length} bytes of data...`); } }); // 'end' will be triggered once when there is no more data available readable.on('end', () => { console.log('Reached end of stream.'); }); Each call to
readable.read() returns a chunk of data, or null. The chunks are not concatenated. A while loop is necessary to consume all data currently in the buffer. When reading a large file, .read() may return null, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new 'readable' event will be emitted when there is more data in the buffer. Finally, the 'end' event will be emitted when there is no more data to come. Therefore, to read a file's whole contents from a
readable, it is necessary to collect chunks across multiple 'readable' events: const chunks = []; readable.on('readable', () => { let chunk; while (null !== (chunk = readable.read())) { chunks.push(chunk); } }); readable.on('end', () => { const content = chunks.join(''); }); A
Readable stream in object mode will always return a single item from a call to readable.read(size), regardless of the value of the size argument. If the
readable.read() method returns a chunk of data, a 'data' event will also be emitted. Calling read after the
'end' event has been emitted will return null. No runtime error will be raised. @param size Optional argument to specify how much data to read.
- initial?: undefined,): Promise<T>;
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied, the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeError with the ERR_INVALID_ARGS code property. The reducer function iterates the stream element by element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function and pass it to the
readable.map method. @param fn a reducer function to call over every chunk in the stream. Async or not.
@param initial the initial value to use in the reduction.
@returns a promise for the final value of the reduction.
initial: T,): Promise<T>;
@param fn a reducer function to call over every chunk in the stream. Async or not.
@param initial the initial value to use in the reduction.
@returns a promise for the final value of the reduction.
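As a minimal sketch (assuming a Node.js version with the experimental stream helpers, 17.5.0 or later), reduce() can fold an object-mode stream into a single value:

```javascript
import { Readable } from 'node:stream';

// Sum the chunks of an object-mode stream.
// reduce() consumes the stream and resolves with the final accumulator.
const sum = await Readable
  .from([1, 2, 3, 4])
  .reduce((previous, chunk) => previous + chunk, 0);

console.log(sum); // 10
```

Omitting the initial value would make the first chunk (1) the starting accumulator instead.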
- eventName?: string | symbol): this;
Removes all listeners, or those of the specified
eventName. It is bad practice to remove listeners added elsewhere in the code, particularly when the
EventEmitter instance was created by some other component or module (e.g. sockets or file streams). Returns a reference to the
EventEmitter, so that calls can be chained. - event: 'close', listener: () => void): this;
Removes the specified
listener from the listener array for the event named eventName. const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback); removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance. Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any
removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from emit() in progress. Subsequent events behave as expected. import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // A Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the
emitter.listeners() method will need to be recreated. When a single function has been added as a handler multiple times for a single event (as in the example below),
removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed: import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping'); Returns a reference to the
EventEmitter, so that calls can be chained. The
readable.resume() method causes an explicitly paused Readable stream to resume emitting 'data' events, switching the stream into flowing mode. The
readable.resume() method can be used to fully consume the data from a stream without actually processing any of that data: getReadableStreamSomehow() .resume() .on('end', () => { console.log('Reached the end, but did not read anything.'); }); The
readable.resume() method has no effect if there is a 'readable' event listener. - autoPadding?: boolean): this;
When using block encryption algorithms, the
Cipheriv class will automatically add padding to the input data to the appropriate block size. To disable the default padding call cipher.setAutoPadding(false). When
autoPadding is false, the length of the entire input data must be a multiple of the cipher's block size or cipher.final() will throw an error. Disabling automatic padding is useful for non-standard padding, for instance using 0x0 instead of PKCS padding. The
cipher.setAutoPadding() method must be called before cipher.final(). @returns for method chaining.
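A minimal sketch of disabling automatic padding: with autoPadding off, the input must already be block-aligned (16 bytes for AES), and the ciphertext gains no padding block:

```javascript
import { createCipheriv, createDecipheriv, randomBytes } from 'node:crypto';

const key = randomBytes(16);
const iv = randomBytes(16);

// 'exactly 16 bytes' is one full AES block, so no padding is needed.
const cipher = createCipheriv('aes-128-cbc', key, iv);
cipher.setAutoPadding(false);
const encrypted = Buffer.concat([
  cipher.update('exactly 16 bytes', 'utf8'),
  cipher.final(),
]);
console.log(encrypted.length); // 16 (no extra padding block appended)

// The decipher must also disable padding, or final() would reject
// the plaintext for lacking valid PKCS padding bytes.
const decipher = createDecipheriv('aes-128-cbc', key, iv);
decipher.setAutoPadding(false);
const decrypted = Buffer.concat([
  decipher.update(encrypted),
  decipher.final(),
]).toString('utf8');
console.log(decrypted); // 'exactly 16 bytes'
```

With auto padding left on, the same 16-byte input would instead produce 32 bytes of ciphertext (a full padding block is appended).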
- encoding: BufferEncoding): this;
The
writable.setDefaultEncoding() method sets the default encoding for a Writable stream. @param encoding The new default encoding
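As a sketch, the default encoding determines how strings passed to write() without an explicit encoding are decoded into bytes (the small collecting Writable here is illustrative, not part of the API):

```javascript
import { Writable } from 'node:stream';

// A tiny Writable that records each chunk it receives.
const chunks = [];
const writable = new Writable({
  write(chunk, encoding, callback) {
    chunks.push(chunk);
    callback();
  },
});

// Strings written without an explicit encoding are now parsed as hex.
writable.setDefaultEncoding('hex');
writable.write('6869'); // the bytes 0x68 0x69
writable.end();

console.log(chunks[0].toString('utf8')); // 'hi'
```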
- encoding: BufferEncoding): this;
The
readable.setEncoding() method sets the character encoding for data read from the Readable stream. By default, no encoding is assigned and stream data will be returned as
Buffer objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than as Buffer objects. For instance, calling readable.setEncoding('utf8') will cause the output data to be interpreted as UTF-8 data, and passed as strings. Calling readable.setEncoding('hex') will cause the data to be encoded in hexadecimal string format. The
Readable stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream as Buffer objects. const readable = getReadableStreamSomehow(); readable.setEncoding('utf8'); readable.on('data', (chunk) => { assert.equal(typeof chunk, 'string'); console.log('Got %d characters of string data:', chunk.length); }); @param encoding The encoding to use.
- n: number): this;
By default
EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps finding memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners. Returns a reference to the
EventEmitter, so that calls can be chained. - some(): Promise<boolean>;
This method is similar to
Array.prototype.some and calls fn on each chunk in the stream until the awaited return value is true (or any truthy value). Once an fn call on a chunk's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false. @param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to
true if fn returned a truthy value for at least one of the chunks. - @param limit
the number of chunks to take from the readable.
@returns a stream with limit chunks taken.
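A short sketch of both helpers above (experimental, Node.js 17.5.0+): take() truncates the stream after a fixed number of chunks, while some() short-circuits as soon as a chunk satisfies the predicate:

```javascript
import { Readable } from 'node:stream';

// take(2) yields only the first two chunks, then ends the stream.
const firstTwo = [];
for await (const chunk of Readable.from([1, 2, 3, 4]).take(2)) {
  firstTwo.push(chunk);
}
console.log(firstTwo); // [ 1, 2 ]

// some() destroys the stream once the predicate returns a truthy value.
const hasLarge = await Readable.from([1, 2, 3, 4]).some((n) => n > 3);
console.log(hasLarge); // true
```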
- ): Promise<any[]>;
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.
@returns a promise containing an array with the contents of the stream.
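A minimal sketch of the helper (experimental); because the whole stream is buffered, it is best reserved for small streams and tests:

```javascript
import { Readable } from 'node:stream';

// toArray() consumes the stream and resolves with all of its chunks.
const contents = await Readable.from(['a', 'b', 'c']).toArray();
console.log(contents); // [ 'a', 'b', 'c' ]
```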
The
writable.uncork() method flushes all data buffered since cork was called. When using
writable.cork() and writable.uncork() to manage the buffering of writes to a stream, defer calls to writable.uncork() using process.nextTick(). Doing so allows batching of all writable.write() calls that occur within a given Node.js event loop phase. stream.cork(); stream.write('some '); stream.write('data '); process.nextTick(() => stream.uncork()); If the
writable.cork() method is called multiple times on a stream, the same number of calls to writable.uncork() must be called to flush the buffered data. stream.cork(); stream.write('some '); stream.cork(); stream.write('data '); process.nextTick(() => { stream.uncork(); // The data will not be flushed until uncork() is called a second time. stream.uncork(); }); See also:
writable.cork(). - destination?: WritableStream): this;
The
readable.unpipe() method detaches a Writable stream previously attached using the pipe method. If the
destination is not specified, then all pipes are detached. If the
destination is specified, but no pipe is set up for it, then the method does nothing. import fs from 'node:fs'; const readable = getReadableStreamSomehow(); const writable = fs.createWriteStream('file.txt'); // All the data from readable goes into 'file.txt', // but only for the first second. readable.pipe(writable); setTimeout(() => { console.log('Stop writing to file.txt.'); readable.unpipe(writable); console.log('Manually close the file stream.'); writable.end(); }, 1000); @param destination Optional specific stream to unpipe
- chunk: any, encoding?: BufferEncoding): void;
Passing
chunk as null signals the end of the stream (EOF) and behaves the same as readable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed. The
readable.unshift() method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party. The
stream.unshift(chunk) method cannot be called after the 'end' event has been emitted or a runtime error will be thrown. Developers using
stream.unshift() often should consider switching to use of a Transform stream instead. See the API for stream implementers section for more information. // Pull off a header delimited by \n\n. // Use unshift() if we get too much. // Call the callback with (error, header, stream). import { StringDecoder } from 'node:string_decoder'; function parseHeader(stream, callback) { stream.on('error', callback); stream.on('readable', onReadable); const decoder = new StringDecoder('utf8'); let header = ''; function onReadable() { let chunk; while (null !== (chunk = stream.read())) { const str = decoder.write(chunk); if (str.includes('\n\n')) { // Found the header boundary. const split = str.split(/\n\n/); header += split.shift(); const remaining = split.join('\n\n'); const buf = Buffer.from(remaining, 'utf8'); stream.removeListener('error', callback); // Remove the 'readable' listener before unshifting. stream.removeListener('readable', onReadable); if (buf.length) stream.unshift(buf); // Now the body of the message can be read from the stream. callback(null, header, stream); return; } // Still reading the header. header += str; } } } Unlike push,
stream.unshift(chunk) will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results if readable.unshift() is called during a read (i.e. from within a _read implementation on a custom stream). Following the call to readable.unshift() with an immediate push will reset the reading state appropriately, however it is best to simply avoid calling readable.unshift() while in the process of performing a read. @param chunk Chunk of data to unshift onto the read queue. For streams not operating in object mode,
chunk must be a {string}, {Buffer}, {TypedArray}, {DataView} or null. For object mode streams, chunk may be any JavaScript value. @param encoding Encoding of string chunks. Must be a valid
Buffer encoding, such as 'utf8' or 'ascii'. - ): NonSharedBuffer;
Updates the cipher with
data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer, TypedArray, or DataView. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored. The
outputEncoding specifies the output format of the enciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned. The
cipher.update() method can be called multiple times with new data until cipher.final() is called. Calling cipher.update() after cipher.final() will result in an error being thrown.
data: string,): NonSharedBuffer;
@param inputEncoding The encoding of the data.
data: ArrayBufferView, inputEncoding: undefined,): string;
@param inputEncoding The encoding of the data.
@param outputEncoding The encoding of the return value.
data: string,): string;
@param inputEncoding The encoding of the data.
@param outputEncoding The encoding of the return value.
- wrap(stream: ReadableStream): this;
Prior to Node.js 0.10, streams did not implement the entire
node:stream module API as it is currently defined. (See Compatibility for more information.) When using an older Node.js library that emits
'data' events and has a pause method that is advisory only, the readable.wrap() method can be used to create a Readable stream that uses the old stream as its data source. It will rarely be necessary to use
readable.wrap() but the method has been provided as a convenience for interacting with older Node.js applications and libraries. import { OldReader } from './old-api-module.js'; import { Readable } from 'node:stream'; const oreader = new OldReader(); const myReader = new Readable().wrap(oreader); myReader.on('readable', () => { myReader.read(); // etc. }); @param stream An "old style" readable stream
- chunk: any,): boolean;
The
writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted. The return value is
true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted. While a stream is not draining, calls to
write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability. Writing data while the stream is not draining is particularly problematic for a
Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added. If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event: function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); }); A
Writable stream in object mode will always ignore the encoding argument. @param chunk Optional data to write. For streams not operating in object mode,
chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null. @param callback Callback for when this chunk of data is flushed.
@returns false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.
chunk: any, encoding: BufferEncoding,): boolean;
@param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.
@param encoding The encoding, if chunk is a string.
@param callback Callback for when this chunk of data is flushed.
@returns false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.
interface CipherChaCha20Poly1305Options
- signal?: AbortSignal
When provided the corresponding
AbortController can be used to cancel an asynchronous action. - transform?: (this: Transform, chunk: any, encoding: BufferEncoding, callback: TransformCallback) => void
interface CipherGCM
Instances of the
Cipheriv class are used to encrypt data. The class can be used in one of two ways: - As a
stream that is both readable and writable, where plain unencrypted data is written to produce encrypted data on the readable side, or - Using the
cipher.update() and cipher.final() methods to produce the encrypted data.
The createCipheriv method is used to create
Cipheriv instances. Cipheriv objects are not to be created directly using the new keyword. Example: Using
Cipheriv objects as streams: const { scrypt, randomFill, createCipheriv, } = await import('node:crypto'); const algorithm = 'aes-192-cbc'; const password = 'Password used to generate key'; // First, we'll generate the key. The key length is dependent on the algorithm. // In this case for aes192, it is 24 bytes (192 bits). scrypt(password, 'salt', 24, (err, key) => { if (err) throw err; // Then, we'll generate a random initialization vector randomFill(new Uint8Array(16), (err, iv) => { if (err) throw err; // Once we have the key and iv, we can create and use the cipher... const cipher = createCipheriv(algorithm, key, iv); let encrypted = ''; cipher.setEncoding('hex'); cipher.on('data', (chunk) => encrypted += chunk); cipher.on('end', () => console.log(encrypted)); cipher.write('some clear text data'); cipher.end(); }); }); Example: Using
Cipheriv and piped streams: import { createReadStream, createWriteStream, } from 'node:fs'; import { pipeline, } from 'node:stream'; const { scrypt, randomFill, createCipheriv, } = await import('node:crypto'); const algorithm = 'aes-192-cbc'; const password = 'Password used to generate key'; // First, we'll generate the key. The key length is dependent on the algorithm. // In this case for aes192, it is 24 bytes (192 bits). scrypt(password, 'salt', 24, (err, key) => { if (err) throw err; // Then, we'll generate a random initialization vector randomFill(new Uint8Array(16), (err, iv) => { if (err) throw err; const cipher = createCipheriv(algorithm, key, iv); const input = createReadStream('test.js'); const output = createWriteStream('test.enc'); pipeline(input, cipher, output, (err) => { if (err) throw err; }); }); }); Example: Using the
cipher.update() and cipher.final() methods: const { scrypt, randomFill, createCipheriv, } = await import('node:crypto'); const algorithm = 'aes-192-cbc'; const password = 'Password used to generate key'; // First, we'll generate the key. The key length is dependent on the algorithm. // In this case for aes192, it is 24 bytes (192 bits). scrypt(password, 'salt', 24, (err, key) => { if (err) throw err; // Then, we'll generate a random initialization vector randomFill(new Uint8Array(16), (err, iv) => { if (err) throw err; const cipher = createCipheriv(algorithm, key, iv); let encrypted = cipher.update('some clear text data', 'utf8', 'hex'); encrypted += cipher.final('hex'); console.log(encrypted); }); }); - allowHalfOpen: boolean
If
false then the stream will automatically end the writable side when the readable side ends. Set initially by the allowHalfOpen constructor option, which defaults to true. This can be changed manually to change the half-open behavior of an existing
Duplex stream instance, but must be changed before the 'end' event is emitted. - readable: boolean
Is
true if it is safe to call read, which means the stream has not been destroyed or emitted 'error' or 'end'. - readonly readableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'end'. - readonly readableEncoding: null | BufferEncoding
Getter for the property
encoding of a given Readable stream. The encoding property can be set using the setEncoding method. - readonly readableFlowing: null | boolean
This property reflects the current state of a
Readable stream as described in the Three states section. - readonly readableHighWaterMark: number
Returns the value of
highWaterMark passed when creating this Readable. - readonly readableLength: number
This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the
highWaterMark. - readonly writable: boolean
Is
true if it is safe to call writable.write(), which means the stream has not been destroyed, errored, or ended. - readonly writableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'finish'. - readonly writableCorked: number
Number of times
writable.uncork() needs to be called in order to fully uncork the stream. - readonly writableEnded: boolean
Is
true after writable.end() has been called. This property does not indicate whether the data has been flushed; for this use writable.writableFinished instead. - readonly writableHighWaterMark: number
Return the value of
highWaterMark passed when creating this Writable. - readonly writableLength: number
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the
highWaterMark. - readonly writableNeedDrain: boolean
Is
true if the stream's buffer has been full and stream will emit 'drain'. Calls
readable.destroy() with an AbortError and returns a promise that fulfills when the stream is finished. - @returns
AsyncIterator to fully consume the stream. - event: 'close', listener: () => void): this;
Event emitter. The defined events on documents include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'data', listener: (chunk: any) => void): this;
event: 'drain', listener: () => void): this;
event: 'end', listener: () => void): this;
event: 'error',): this;
event: 'finish', listener: () => void): this;
event: 'pause',listener: () => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'pipe',): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'readable',listener: () => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'resume',listener: () => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'unpipe',): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: string | symbol,listener: (...args: any[]) => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
This method returns a new stream with chunks of the underlying stream paired with a counter in the form [index, chunk]. The first index value is 0 and it increases by 1 for each chunk produced. @returns a stream of indexed pairs.
- stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
The writable.cork() method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.
The primary intent of writable.cork() is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them all to writable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork() without implementing writable._writev() may have an adverse effect on throughput.
See also: writable.uncork(), writable._writev(). - ): this;
Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the readable stream will release any internal resources and subsequent calls to push() will be ignored.
Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.
Implementors should not override this method, but instead implement readable._destroy(). @param error Error which will be passed as payload in 'error' event - drop(limit: number,
This method returns a new stream with the first limit chunks dropped from the start.
@param limit the number of chunks to drop from the readable.
@returns a stream with limit chunks dropped from the start.
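A minimal sketch of drop() on an in-memory stream; this assumes a Node.js version where the experimental Readable helper methods (drop(), toArray()) are available:

```javascript
import { Readable } from 'node:stream';

// Build a readable stream from an array, drop the first two chunks,
// and collect what remains with toArray().
const remaining = await Readable.from([1, 2, 3, 4, 5])
  .drop(2)
  .toArray();

console.log(remaining); // [ 3, 4, 5 ]
```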
- emit(event: 'close'): boolean;
Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.
Returns true if the event had listeners, false otherwise. import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener - end(cb?: () => void): this;
Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream. Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!
end(chunk: any, cb?: () => void): this; @param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.
end(chunk: any, encoding: BufferEncoding, cb?: () => void): this; @param encoding The encoding if chunk is a string
Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or Symbols. import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ] - ): Promise<boolean>;
This method is similar to Array.prototype.every and calls fn on each chunk in the stream to check if all awaited return values are truthy for fn. Once an fn call on a chunk's awaited return value is falsy, the stream is destroyed and the promise is fulfilled with false. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true. @param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to true if fn returned a truthy value for every one of the chunks.
This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise, that promise will be awaited. @param fn a function to filter chunks from the stream. Async or not.
@returns a stream filtered with the predicate fn.
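A short sketch of every() and filter() together, assuming a Node.js version with the experimental Readable helper methods:

```javascript
import { Readable } from 'node:stream';

// filter() keeps only the chunks for which fn returns a truthy value.
const evens = await Readable.from([1, 2, 3, 4])
  .filter((n) => n % 2 === 0)
  .toArray();

// every() resolves true only if fn is truthy for all chunks.
const allPositive = await Readable.from([1, 2, 3, 4])
  .every((n) => n > 0);

console.log(evens, allPositive); // [ 2, 4 ] true
```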
Once the cipher.final() method has been called, the Cipheriv object can no longer be used to encrypt data. Attempts to call cipher.final() more than once will result in an error being thrown. @returns Any remaining enciphered contents. If outputEncoding is specified, a string is returned. If an outputEncoding is not provided, a Buffer is returned.
outputEncoding: BufferEncoding): string; @param outputEncoding The encoding of the return value. @returns Any remaining enciphered contents. If outputEncoding is specified, a string is returned. If an outputEncoding is not provided, a Buffer is returned. - ): Promise<undefined | T>;
This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined. @param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found. find(): Promise<any>;
This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.
It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.
@param fn a function to map over every chunk in the stream. May be async. May be a stream or generator.
@returns a stream flat-mapped with the function fn.
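A minimal flatMap() sketch: each chunk maps to an iterable (here a plain array), and the results are flattened into a single stream. This assumes the experimental Readable helper methods are available:

```javascript
import { Readable } from 'node:stream';

// Each string chunk is split into words; the per-chunk arrays are
// flattened into one stream of word chunks.
const flattened = await Readable.from(['a b', 'c'])
  .flatMap((s) => s.split(' '))
  .toArray();

console.log(flattened); // [ 'a', 'b', 'c' ]
```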
- ): Promise<void>;
This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise, that promise will be awaited.
This method is different from for await...of loops in that it can optionally process chunks concurrently. In addition, a forEach iteration can only be stopped by having passed a signal option and aborting the related AbortController, while for await...of can be stopped with break or return. In either case the stream will be destroyed.
This method is different from listening to the 'data' event in that it uses the readable event in the underlying machinery and can limit the number of concurrent fn calls. @param fn a function to call on each chunk of the stream. Async or not.
@returns a promise for when the stream has finished.
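A small forEach() sketch, assuming the experimental Readable helper methods are available; the returned promise settles only after the whole stream has been consumed:

```javascript
import { Readable } from 'node:stream';

// Collect each chunk via forEach(); with the default concurrency,
// chunks are visited in order.
const seen = [];
await Readable.from(['a', 'b', 'c']).forEach(async (chunk) => {
  seen.push(chunk);
});

console.log(seen); // [ 'a', 'b', 'c' ]
```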
Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to EventEmitter.defaultMaxListeners.
The readable.isPaused() method returns the current operating state of the Readable. This is used primarily by the mechanism that underlies the readable.pipe() method. In most typical cases, there will be no reason to use this method directly. const readable = new stream.Readable(); readable.isPaused(); // === false readable.pause(); readable.isPaused(); // === true readable.resume(); readable.isPaused(); // === false - options?: { destroyOnReturn: boolean }): AsyncIterator<any>;
The iterator created by this method gives users the option to cancel the destruction of the stream if the for await...of loop is exited by return, break, or throw, or if the iterator should destroy the stream if the stream emitted an error during iteration. - eventName: string | symbol, listener?: Function): number;
Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event. @param eventName The name of the event being listened for
@param listener The event handler function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named eventName. server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ] - map(
This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise, that promise will be awaited before being passed to the result stream. @param fn a function to map over every chunk in the stream. Async or not.
@returns a stream mapped with the function fn.
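A minimal map() sketch, assuming the experimental Readable helper methods are available; an async callback is awaited before its result is passed downstream:

```javascript
import { Readable } from 'node:stream';

// Double each chunk; the async callback's promise is awaited
// before the value continues down the stream.
const doubled = await Readable.from([1, 2, 3])
  .map(async (n) => n * 2)
  .toArray();

console.log(doubled); // [ 2, 4, 6 ]
```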
- eventName: string | symbol, listener: (...args: any[]) => void): this;
Alias for emitter.removeListener(). - on(event: 'close', listener: () => void): this;
Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times. server.on('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the EventEmitter, so that calls can be chained. By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array. import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a @param listener The callback function
- once(event: 'close', listener: () => void): this;
Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked. server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the EventEmitter, so that calls can be chained. By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array. import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a @param listener The callback function
The readable.pause() method will cause a stream in flowing mode to stop emitting 'data' events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer. const readable = getReadableStreamSomehow(); readable.on('data', (chunk) => { console.log(`Received ${chunk.length} bytes of data.`); readable.pause(); console.log('There will be no additional data for 1 second.'); setTimeout(() => { console.log('Now data will start flowing again.'); readable.resume(); }, 1000); });
The readable.pause() method has no effect if there is a 'readable' event listener. - event: 'close', listener: () => void): this;
Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times. server.prependListener('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the EventEmitter, so that calls can be chained. @param listener The callback function
- event: 'close', listener: () => void): this;
Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked. server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the EventEmitter, so that calls can be chained. @param listener The callback function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named eventName, including any wrappers (such as those created by .once()). import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log'); - read(size?: number): any;
The readable.read() method reads data out of the internal buffer and returns it. If no data is available to be read, null is returned. By default, the data is returned as a Buffer object unless an encoding has been specified using the readable.setEncoding() method or the stream is operating in object mode.
The optional size argument specifies a specific number of bytes to read. If size bytes are not available to be read, null will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.
If the size argument is not specified, all of the data contained in the internal buffer will be returned.
The size argument must be less than or equal to 1 GiB.
The readable.read() method should only be called on Readable streams operating in paused mode. In flowing mode, readable.read() is called automatically until the internal buffer is fully drained. const readable = getReadableStreamSomehow(); // 'readable' may be triggered multiple times as data is buffered in readable.on('readable', () => { let chunk; console.log('Stream is readable (new data received in buffer)'); // Use a loop to make sure we read all currently available data while (null !== (chunk = readable.read())) { console.log(`Read ${chunk.length} bytes of data...`); } }); // 'end' will be triggered once when there is no more data available readable.on('end', () => { console.log('Reached end of stream.'); });
Each call to readable.read() returns a chunk of data, or null. The chunks are not concatenated. A while loop is necessary to consume all data currently in the buffer. When reading a large file, .read() may return null, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new 'readable' event will be emitted when there is more data in the buffer. Finally the 'end' event will be emitted when there is no more data to come.
Therefore to read a file's whole contents from a readable, it is necessary to collect chunks across multiple 'readable' events: const chunks = []; readable.on('readable', () => { let chunk; while (null !== (chunk = readable.read())) { chunks.push(chunk); } }); readable.on('end', () => { const content = chunks.join(''); });
A Readable stream in object mode will always return a single item from a call to readable.read(size), regardless of the value of the size argument.
If the readable.read() method returns a chunk of data, a 'data' event will also be emitted.
Calling read after the 'end' event has been emitted will return null. No runtime error will be raised. @param size Optional argument to specify how much data to read.
- initial?: undefined,): Promise<T>;
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.
The reducer function iterates the stream element-by-element which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the readable.map method. @param fn a reducer function to call over every chunk in the stream. Async or not.
@param initial the initial value to use in the reduction.
@returns a promise for the final value of the reduction.
initial: T,): Promise<T>;
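A minimal reduce() sketch, assuming the experimental Readable helper methods are available; with an initial value the reducer starts from 0, while without one the first chunk would serve as the initial value:

```javascript
import { Readable } from 'node:stream';

// Sum the chunks of the stream, seeding the accumulator with 0.
const sum = await Readable.from([1, 2, 3, 4])
  .reduce((acc, n) => acc + n, 0);

console.log(sum); // 10
```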
- eventName?: string | symbol): this;
Removes all listeners, or those of the specified eventName.
It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).
Returns a reference to the EventEmitter, so that calls can be chained. - event: 'close', listener: () => void): this;
Removes the specified listener from the listener array for the event named eventName. const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback);
removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance.
Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from emit() in progress. Subsequent events behave as expected. import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // A
Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.
When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed: import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping');
Returns a reference to the EventEmitter, so that calls can be chained. The
readable.resume() method causes an explicitly paused Readable stream to resume emitting 'data' events, switching the stream into flowing mode.
The readable.resume() method can be used to fully consume the data from a stream without actually processing any of that data: getReadableStreamSomehow() .resume() .on('end', () => { console.log('Reached the end, but did not read anything.'); });
The readable.resume() method has no effect if there is a 'readable' event listener. - autoPadding?: boolean): this;
When using block encryption algorithms, the Cipheriv class will automatically add padding to the input data to the appropriate block size. To disable the default padding call cipher.setAutoPadding(false).
When autoPadding is false, the length of the entire input data must be a multiple of the cipher's block size or cipher.final() will throw an error. Disabling automatic padding is useful for non-standard padding, for instance using 0x0 instead of PKCS padding.
The cipher.setAutoPadding() method must be called before cipher.final(). @returns for method chaining.
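A sketch of the padding behavior using createCipheriv(), update(), and final() with a throwaway random key and IV. With the default PKCS padding, a block-aligned 16-byte input gains a full extra padding block under AES-256-CBC; with setAutoPadding(false), the block-aligned input is enciphered as-is:

```javascript
import { createCipheriv, randomBytes } from 'node:crypto';

const key = randomBytes(32); // AES-256 key
const iv = randomBytes(16);  // CBC initialization vector

// Default PKCS padding: a full 16-byte block gets one extra
// padding block appended, so the ciphertext is 32 bytes.
const padded = createCipheriv('aes-256-cbc', key, iv);
const withPadding = Buffer.concat([
  padded.update(Buffer.alloc(16)),
  padded.final(),
]);

// Auto padding disabled: the input must already be a multiple of
// the block size, and no padding block is appended (16 bytes out).
const unpadded = createCipheriv('aes-256-cbc', key, iv);
unpadded.setAutoPadding(false);
const withoutPadding = Buffer.concat([
  unpadded.update(Buffer.alloc(16)),
  unpadded.final(),
]);

console.log(withPadding.length, withoutPadding.length); // 32 16
```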
- encoding: BufferEncoding): this;
The writable.setDefaultEncoding() method sets the default encoding for a Writable stream. @param encoding The new default encoding
- encoding: BufferEncoding): this;
The readable.setEncoding() method sets the character encoding for data read from the Readable stream.
By default, no encoding is assigned and stream data will be returned as Buffer objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than as Buffer objects. For instance, calling readable.setEncoding('utf8') will cause the output data to be interpreted as UTF-8 data, and passed as strings. Calling readable.setEncoding('hex') will cause the data to be encoded in hexadecimal string format.
The Readable stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream as Buffer objects. const readable = getReadableStreamSomehow(); readable.setEncoding('utf8'); readable.on('data', (chunk) => { assert.equal(typeof chunk, 'string'); console.log('Got %d characters of string data:', chunk.length); }); @param encoding The encoding to use.
- n: number): this;
By default EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps finding memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.
Returns a reference to the EventEmitter, so that calls can be chained. - some(): Promise<boolean>;
This method is similar to Array.prototype.some and calls fn on each chunk in the stream until the awaited return value is true (or any truthy value). Once an fn call on a chunk's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false. @param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to true if fn returned a truthy value for at least one of the chunks. - @param limit
the number of chunks to take from the readable.
@returns a stream with limit chunks taken.
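A short sketch of some() and take(), assuming the experimental Readable helper methods are available; some() short-circuits and destroys the stream as soon as a chunk satisfies the predicate:

```javascript
import { Readable } from 'node:stream';

// some() resolves true on the first chunk matching the predicate.
const hasNegative = await Readable.from([3, -1, 7])
  .some((n) => n < 0);

// take() yields only the first `limit` chunks.
const firstTwo = await Readable.from([3, -1, 7]).take(2).toArray();

console.log(hasNegative, firstTwo); // true [ 3, -1 ]
```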
- ): Promise<any[]>;
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.
@returnsa promise containing an array with the contents of the stream.
The writable.uncork() method flushes all data buffered since cork was called.
When using writable.cork() and writable.uncork() to manage the buffering of writes to a stream, defer calls to writable.uncork() using process.nextTick(). Doing so allows batching of all writable.write() calls that occur within a given Node.js event loop phase. stream.cork(); stream.write('some '); stream.write('data '); process.nextTick(() => stream.uncork());
If the writable.cork() method is called multiple times on a stream, the same number of calls to writable.uncork() must be called to flush the buffered data. stream.cork(); stream.write('some '); stream.cork(); stream.write('data '); process.nextTick(() => { stream.uncork(); // The data will not be flushed until uncork() is called a second time. stream.uncork(); });
See also: writable.cork(). - destination?: WritableStream): this;
The readable.unpipe() method detaches a Writable stream previously attached using the pipe method.
If the destination is not specified, then all pipes are detached.
If the destination is specified, but no pipe is set up for it, then the method does nothing. import fs from 'node:fs'; const readable = getReadableStreamSomehow(); const writable = fs.createWriteStream('file.txt'); // All the data from readable goes into 'file.txt', // but only for the first second. readable.pipe(writable); setTimeout(() => { console.log('Stop writing to file.txt.'); readable.unpipe(writable); console.log('Manually close the file stream.'); writable.end(); }, 1000); @param destination Optional specific stream to unpipe
- chunk: any, encoding?: BufferEncoding): void;
Passing chunk as null signals the end of the stream (EOF) and behaves the same as readable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.
The readable.unshift() method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.
The stream.unshift(chunk) method cannot be called after the 'end' event has been emitted or a runtime error will be thrown.
Developers using stream.unshift() often should consider switching to use of a Transform stream instead. See the API for stream implementers section for more information. // Pull off a header delimited by \n\n. // Use unshift() if we get too much. // Call the callback with (error, header, stream). import { StringDecoder } from 'node:string_decoder'; function parseHeader(stream, callback) { stream.on('error', callback); stream.on('readable', onReadable); const decoder = new StringDecoder('utf8'); let header = ''; function onReadable() { let chunk; while (null !== (chunk = stream.read())) { const str = decoder.write(chunk); if (str.includes('\n\n')) { // Found the header boundary. const split = str.split(/\n\n/); header += split.shift(); const remaining = split.join('\n\n'); const buf = Buffer.from(remaining, 'utf8'); stream.removeListener('error', callback); // Remove the 'readable' listener before unshifting. stream.removeListener('readable', onReadable); if (buf.length) stream.unshift(buf); // Now the body of the message can be read from the stream. callback(null, header, stream); return; } // Still reading the header. header += str; } } }
Unlike push, stream.unshift(chunk) will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results if readable.unshift() is called during a read (i.e. from within a _read implementation on a custom stream). Following the call to readable.unshift() with an immediate push will reset the reading state appropriately, however it is best to simply avoid calling readable.unshift() while in the process of performing a read. @param chunk Chunk of data to unshift onto the read queue. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray}, {DataView} or null. For object mode streams, chunk may be any JavaScript value. @param encoding Encoding of string chunks. Must be a valid Buffer encoding, such as 'utf8' or 'ascii'. - ): NonSharedBuffer;
Updates the cipher with
data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer, TypedArray, or DataView. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored. The
outputEncoding specifies the output format of the enciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned. The
cipher.update() method can be called multiple times with new data until cipher.final() is called. Calling cipher.update() after cipher.final() will result in an error being thrown.
data: string, ): NonSharedBuffer; Updates the cipher with data, as described above. @param inputEncoding The encoding of the data.
data: ArrayBufferView, inputEncoding: undefined, ): string; Updates the cipher with data, as described above. @param inputEncoding The encoding of the data. @param outputEncoding The encoding of the return value.
data: string, ): string; Updates the cipher with data, as described above. @param inputEncoding The encoding of the data. @param outputEncoding The
encoding of the return value. - wrap(stream: ReadableStream): this;
Prior to Node.js 0.10, streams did not implement the entire
node:stream module API as it is currently defined. (See Compatibility for more information.) When using an older Node.js library that emits
'data' events and has a pause() method that is advisory only, the readable.wrap() method can be used to create a Readable stream that uses the old stream as its data source. It will rarely be necessary to use
readable.wrap() but the method has been provided as a convenience for interacting with older Node.js applications and libraries. import { OldReader } from './old-api-module.js'; import { Readable } from 'node:stream'; const oreader = new OldReader(); const myReader = new Readable().wrap(oreader); myReader.on('readable', () => { myReader.read(); // etc. }); @param stream An "old style" readable stream
- chunk: any,): boolean;
The
writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted. The return value is
true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted. While a stream is not draining, calls to
write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing to a socket that is not draining may lead to a remotely exploitable vulnerability. Writing data while the stream is not draining is particularly problematic for a
Transform, because Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added. If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readable and use pipe(). However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event: function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); }); A
Writable stream in object mode will always ignore the encoding argument. @param chunk Optional data to write. For streams not operating in object mode,
chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null. @param callback Callback for when this chunk of data is flushed.
@returns false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true. chunk: any, encoding: BufferEncoding, ): boolean; The
writable.write() method behaves as described above; this overload additionally accepts an encoding for string chunks. @param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null. @param encoding The encoding, if chunk is a string. @param callback Callback for when this chunk of data is flushed.
@returns false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.
interface CipherGCMOptions
- signal?: AbortSignal
When provided the corresponding
AbortController can be used to cancel an asynchronous action. - transform?: (this: Transform, chunk: any, encoding: BufferEncoding, callback: TransformCallback) => void
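These options are commonly passed as the fourth argument to createCipheriv when using an AEAD mode such as GCM. A minimal round-trip sketch (the key size, 12-byte IV, and 16-byte authTagLength shown are illustrative conventions, not requirements of the interface):

```javascript
import { createCipheriv, createDecipheriv, randomBytes } from 'node:crypto';

const key = randomBytes(32); // 256-bit key for aes-256-gcm
const iv = randomBytes(12);  // 12-byte IV is conventional for GCM
const options = { authTagLength: 16 }; // CipherGCMOptions

// Encrypt, then grab the authentication tag produced by final().
const cipher = createCipheriv('aes-256-gcm', key, iv, options);
const encrypted = Buffer.concat([cipher.update('secret', 'utf8'), cipher.final()]);
const tag = cipher.getAuthTag(); // must be stored/transmitted with the ciphertext

// Decrypt: the tag must be set before final(), which verifies it.
const decipher = createDecipheriv('aes-256-gcm', key, iv, options);
decipher.setAuthTag(tag);
const decrypted = Buffer.concat([decipher.update(encrypted), decipher.final()]).toString('utf8');
console.log(decrypted); // 'secret'
```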
interface CipherInfo
interface CipherInfoOptions
interface CipherOCB
Instances of the
Cipheriv class are used to encrypt data. The class can be used in one of two ways: - As a
stream that is both readable and writable, where plain unencrypted data is written to produce encrypted data on the readable side, or - Using the
cipher.update() and cipher.final() methods to produce the encrypted data.
The createCipheriv method is used to create
Cipheriv instances. Cipheriv objects are not to be created directly using the new keyword. Example: Using
Cipheriv objects as streams: const { scrypt, randomFill, createCipheriv, } = await import('node:crypto'); const algorithm = 'aes-192-cbc'; const password = 'Password used to generate key'; // First, we'll generate the key. The key length is dependent on the algorithm. // In this case for aes192, it is 24 bytes (192 bits). scrypt(password, 'salt', 24, (err, key) => { if (err) throw err; // Then, we'll generate a random initialization vector randomFill(new Uint8Array(16), (err, iv) => { if (err) throw err; // Once we have the key and iv, we can create and use the cipher... const cipher = createCipheriv(algorithm, key, iv); let encrypted = ''; cipher.setEncoding('hex'); cipher.on('data', (chunk) => encrypted += chunk); cipher.on('end', () => console.log(encrypted)); cipher.write('some clear text data'); cipher.end(); }); }); Example: Using
Cipheriv and piped streams: import { createReadStream, createWriteStream, } from 'node:fs'; import { pipeline, } from 'node:stream'; const { scrypt, randomFill, createCipheriv, } = await import('node:crypto'); const algorithm = 'aes-192-cbc'; const password = 'Password used to generate key'; // First, we'll generate the key. The key length is dependent on the algorithm. // In this case for aes192, it is 24 bytes (192 bits). scrypt(password, 'salt', 24, (err, key) => { if (err) throw err; // Then, we'll generate a random initialization vector randomFill(new Uint8Array(16), (err, iv) => { if (err) throw err; const cipher = createCipheriv(algorithm, key, iv); const input = createReadStream('test.js'); const output = createWriteStream('test.enc'); pipeline(input, cipher, output, (err) => { if (err) throw err; }); }); }); Example: Using the
cipher.update() and cipher.final() methods: const { scrypt, randomFill, createCipheriv, } = await import('node:crypto'); const algorithm = 'aes-192-cbc'; const password = 'Password used to generate key'; // First, we'll generate the key. The key length is dependent on the algorithm. // In this case for aes192, it is 24 bytes (192 bits). scrypt(password, 'salt', 24, (err, key) => { if (err) throw err; // Then, we'll generate a random initialization vector randomFill(new Uint8Array(16), (err, iv) => { if (err) throw err; const cipher = createCipheriv(algorithm, key, iv); let encrypted = cipher.update('some clear text data', 'utf8', 'hex'); encrypted += cipher.final('hex'); console.log(encrypted); }); }); - allowHalfOpen: boolean
If
false then the stream will automatically end the writable side when the readable side ends. Set initially by the allowHalfOpen constructor option, which defaults to true. This can be changed manually to change the half-open behavior of an existing
Duplex stream instance, but must be changed before the 'end' event is emitted. - readable: boolean
Is
true if it is safe to call read(), which means the stream has not been destroyed or emitted 'error' or 'end'. - readonly readableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'end'. - readonly readableEncoding: null | BufferEncoding
Getter for the property
encoding of a given Readable stream. The encoding property can be set using the setEncoding method. - readonly readableFlowing: null | boolean
This property reflects the current state of a
Readable stream as described in the Three states section. - readonly readableHighWaterMark: number
Returns the value of
highWaterMark passed when creating this Readable. - readonly readableLength: number
This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the
highWaterMark. - readonly writable: boolean
Is
true if it is safe to call writable.write(), which means the stream has not been destroyed, errored, or ended. - readonly writableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'finish'. - readonly writableCorked: number
Number of times
writable.uncork() needs to be called in order to fully uncork the stream. - readonly writableEnded: boolean
Is
true after writable.end() has been called. This property does not indicate whether the data has been flushed; for this, use writable.writableFinished instead. - readonly writableHighWaterMark: number
Return the value of
highWaterMark passed when creating this Writable. - readonly writableLength: number
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the
highWaterMark. - readonly writableNeedDrain: boolean
Is
true if the stream's buffer has been full and the stream will emit 'drain'. Calls
readable.destroy() with an AbortError and returns a promise that fulfills when the stream is finished. - @returns
AsyncIterator to fully consume the stream. - event: 'close', listener: () => void): this;
Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'data', listener: (chunk: any) => void): this;
event: 'drain', listener: () => void): this;
event: 'end', listener: () => void): this;
event: 'error', ): this;
event: 'finish', listener: () => void): this;
event: 'pause', listener: () => void): this;
event: 'pipe', ): this;
event: 'readable', listener: () => void): this;
event: 'resume', listener: () => void): this;
event: 'unpipe', ): this;
event: string | symbol, listener: (...args: any[]) => void): this;
This method returns a new stream with chunks of the underlying stream paired with a counter in the form
[index, chunk]. The first index value is 0 and it increases by 1 for each chunk produced. @returns a stream of indexed pairs.
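The [index, chunk] pairing described above can be reproduced by hand with the experimental map() helper and a counter (a sketch of equivalent behavior, not the built-in method itself):

```javascript
import { Readable } from 'node:stream';

// Pair each chunk with a zero-based index, producing [index, chunk] tuples.
let index = 0;
const indexed = Readable.from(['a', 'b', 'c']).map((chunk) => [index++, chunk]);
const pairs = await indexed.toArray();
console.log(pairs); // [ [ 0, 'a' ], [ 1, 'b' ], [ 2, 'c' ] ]
```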
- stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
The
writable.cork() method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called. The primary intent of
writable.cork() is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them all to writable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork() without implementing writable._writev() may have an adverse effect on throughput. See also:
writable.uncork(), writable._writev(). - ): this;
Destroy the stream. Optionally emit an
'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the readable stream will release any internal resources and subsequent calls to push() will be ignored. Once
destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'. Implementors should not override this method, but instead implement
readable._destroy(). @param error Error which will be passed as payload in
'error' event - drop(limit: number,
This method returns a new stream with the first limit chunks dropped from the start.
@param limit the number of chunks to drop from the readable.
@returns a stream with limit chunks dropped from the start.
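A short sketch of drop() using Readable.from() (these stream helpers are experimental in Node.js):

```javascript
import { Readable } from 'node:stream';

// Skip the first two chunks; only the remainder reaches the consumer.
const rest = await Readable.from([1, 2, 3, 4]).drop(2).toArray();
console.log(rest); // [ 3, 4 ]
```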
- emit(event: 'close'): boolean;
Synchronously calls each of the listeners registered for the event named
eventName, in the order they were registered, passing the supplied arguments to each. Returns
true if the event had listeners, false otherwise. import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener - end(cb?: () => void): this;
Calling the
writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream. Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed! end(chunk: any, cb?: () => void): this; Calling the
writable.end() method signals that no more data will be written to the Writable, as described above. @param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null. end(chunk: any, encoding: BufferEncoding, cb?: () => void): this; Calling the writable.end() method signals that no more data will be written, as described above. @param chunk Optional data to write, as above. @param encoding The encoding if chunk is a string. Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbols. import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ] - ): Promise<boolean>;
This method is similar to
Array.prototype.every and calls fn on each chunk in the stream to check if all awaited return values are truthy for fn. Once an fn call on a chunk's awaited return value is falsy, the stream is destroyed and the promise is fulfilled with false. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true. @param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to
true if fn returned a truthy value for every one of the chunks. This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise, that promise will be
awaited. @param fn a function to filter chunks from the stream. Async or not.
@returns a stream filtered with the predicate fn.
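The every() and filter() helpers described above can be sketched together (both are experimental stream helpers):

```javascript
import { Readable } from 'node:stream';

// every(): resolves true only if the predicate holds for every chunk.
const allPositive = await Readable.from([1, 2, 3]).every((n) => n > 0);

// filter(): keeps only chunks for which the predicate returns a truthy value.
const evens = await Readable.from([1, 2, 3, 4]).filter((n) => n % 2 === 0).toArray();

console.log(allPositive, evens); // true [ 2, 4 ]
```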
Once the
cipher.final() method has been called, the Cipheriv object can no longer be used to encrypt data. Attempts to call cipher.final() more than once will result in an error being thrown. @returns Any remaining enciphered contents. If
outputEncoding is specified, a string is returned. If an outputEncoding is not provided, a Buffer is returned. outputEncoding: BufferEncoding): string; Once the
cipher.final() method has been called, the Cipheriv object can no longer be used to encrypt data, as described above. @param outputEncoding The
encoding of the return value. @returns Any remaining enciphered contents. If
outputEncoding is specified, a string is returned. If an outputEncoding is not provided, a Buffer is returned. - ): Promise<undefined | T>;
This method is similar to
Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined. @param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefined if no element was found. find(): Promise<any>; This method is similar to
Array.prototype.find, as described above. @param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefined if no element was found. This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.
It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.
@param fn a function to map over every chunk in the stream. May be async. May be a stream or generator.
@returns a stream flat-mapped with the function fn.
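A sketch of the find() and flatMap() helpers described above (both experimental):

```javascript
import { Readable } from 'node:stream';

// find(): resolves with the first chunk for which fn returns a truthy value.
const firstBig = await Readable.from([1, 5, 10]).find((n) => n >= 5);

// flatMap(): a chunk may map to an iterable; its items are flattened into the result stream.
const words = await Readable.from(['a b', 'c']).flatMap((s) => s.split(' ')).toArray();

console.log(firstBig, words); // 5 [ 'a', 'b', 'c' ]
```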
- ): Promise<void>;
This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be
awaited. This method is different from
for await...of loops in that it can optionally process chunks concurrently. In addition, a forEach iteration can only be stopped by having passed a signal option and aborting the related AbortController while for await...of can be stopped with break or return. In either case the stream will be destroyed. This method is different from listening to the
'data' event in that it uses the readable event in the underlying machinery and can limit the number of concurrent fn calls. @param fn a function to call on each chunk of the stream. Async or not.
@returns a promise for when the stream has finished.
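A sketch of forEach(), whose returned promise settles once fn has run for every chunk and the stream has finished:

```javascript
import { Readable } from 'node:stream';

// Collect chunks via forEach; the await completes when the stream ends.
const seen = [];
await Readable.from(['x', 'y']).forEach((chunk) => { seen.push(chunk); });
console.log(seen); // [ 'x', 'y' ]
```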
Returns the current max listener value for the
EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to EventEmitter.defaultMaxListeners. The
readable.isPaused() method returns the current operating state of the Readable. This is used primarily by the mechanism that underlies the readable.pipe() method. In most typical cases, there will be no reason to use this method directly. const readable = new stream.Readable(); readable.isPaused(); // === false readable.pause(); readable.isPaused(); // === true readable.resume(); readable.isPaused(); // === false - options?: { destroyOnReturn: boolean }): AsyncIterator<any>;
The iterator created by this method gives users the option to cancel the destruction of the stream if the
for await...of loop is exited by return, break, or throw, or if the iterator should destroy the stream if the stream emitted an error during iteration. - eventName: string | symbol, listener?: Function): number;
Returns the number of listeners listening for the event named
eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.
@param eventName The name of the event being listened for
@param listener The event handler function
Returns a copy of the array of listeners for the event named
eventName. server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ] - map(
This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be
awaited before being passed to the result stream. @param fn a function to map over every chunk in the stream. Async or not.
@returns a stream mapped with the function fn.
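A sketch of map() with an async mapper, which is awaited before the result is passed downstream:

```javascript
import { Readable } from 'node:stream';

// Each chunk is transformed; async return values are awaited in order.
const doubled = await Readable.from([1, 2, 3]).map(async (n) => n * 2).toArray();
console.log(doubled); // [ 2, 4, 6 ]
```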
- eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for
emitter.removeListener(). - on(event: 'close',listener: () => void): this;
Adds the
listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times. server.on('connection', (stream) => { console.log('someone connected!'); }); Returns a reference to the
EventEmitter, so that calls can be chained. By default, event listeners are invoked in the order they are added. The
emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array. import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a @param listener The callback function
- once(event: 'close',listener: () => void): this;
Adds a one-time
listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked. server.once('connection', (stream) => { console.log('Ah, we have our first user!'); }); Returns a reference to the
EventEmitter, so that calls can be chained. By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array. import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a @param listener The callback function
The
readable.pause() method will cause a stream in flowing mode to stop emitting 'data' events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer. const readable = getReadableStreamSomehow(); readable.on('data', (chunk) => { console.log(`Received ${chunk.length} bytes of data.`); readable.pause(); console.log('There will be no additional data for 1 second.'); setTimeout(() => { console.log('Now data will start flowing again.'); readable.resume(); }, 1000); }); The
readable.pause() method has no effect if there is a 'readable' event listener. - event: 'close', listener: () => void): this;
Adds the
listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times. server.prependListener('connection', (stream) => { console.log('someone connected!'); }); Returns a reference to the
EventEmitter, so that calls can be chained. @param listener The callback function
- event: 'close',listener: () => void): this;
Adds a one-time
listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked. server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); }); Returns a reference to the
EventEmitter, so that calls can be chained. @param listener The callback function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName, including any wrappers (such as those created by .once()). import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log'); - read(size?: number): any;
The `readable.read()` method reads data out of the internal buffer and returns it. If no data is available to be read, `null` is returned. By default, the data is returned as a `Buffer` object unless an encoding has been specified using the `readable.setEncoding()` method or the stream is operating in object mode.

The optional `size` argument specifies a specific number of bytes to read. If `size` bytes are not available to be read, `null` will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.

If the `size` argument is not specified, all of the data contained in the internal buffer will be returned.

The `size` argument must be less than or equal to 1 GiB.

The `readable.read()` method should only be called on `Readable` streams operating in paused mode. In flowing mode, `readable.read()` is called automatically until the internal buffer is fully drained.

```js
const readable = getReadableStreamSomehow();

// 'readable' may be triggered multiple times as data is buffered in
readable.on('readable', () => {
  let chunk;
  console.log('Stream is readable (new data received in buffer)');
  // Use a loop to make sure we read all currently available data
  while (null !== (chunk = readable.read())) {
    console.log(`Read ${chunk.length} bytes of data...`);
  }
});

// 'end' will be triggered once when there is no more data available
readable.on('end', () => {
  console.log('Reached end of stream.');
});
```

Each call to `readable.read()` returns a chunk of data, or `null`. The chunks are not concatenated. A `while` loop is necessary to consume all data currently in the buffer. When reading a large file, `.read()` may return `null`, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new `'readable'` event will be emitted when there is more data in the buffer. Finally the `'end'` event will be emitted when there is no more data to come.

Therefore to read a file's whole contents from a `readable`, it is necessary to collect chunks across multiple `'readable'` events:

```js
const chunks = [];

readable.on('readable', () => {
  let chunk;
  while (null !== (chunk = readable.read())) {
    chunks.push(chunk);
  }
});

readable.on('end', () => {
  const content = chunks.join('');
});
```

A `Readable` stream in object mode will always return a single item from a call to `readable.read(size)`, regardless of the value of the `size` argument.

If the `readable.read()` method returns a chunk of data, a `'data'` event will also be emitted.

Calling read after the `'end'` event has been emitted will return `null`. No runtime error will be raised.

@param size Optional argument to specify how much data to read.
- initial?: undefined,): Promise<T>;
This method calls `fn` on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.

If no initial value is supplied, the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a `TypeError` with the `ERR_INVALID_ARGS` code property.

The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function and use the `readable.map` method.

@param fn a reducer function to call over every chunk in the stream. Async or not.
@param initial the initial value to use in the reduction.
@returns a promise for the final value of the reduction.

initial: T,): Promise<T>;
This overload behaves as described above.
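A minimal sketch of the reducer in both forms. Note that `reduce()` is one of the newer, experimental stream helper methods; this assumes a Node.js (or Bun) version that provides it:

```js
import { Readable } from 'node:stream';

// With an explicit initial value: sum the chunks of an object-mode stream.
const total = await Readable.from([1, 2, 3, 4])
  .reduce((acc, chunk) => acc + chunk, 0);
// total === 10

// Without an initial value, the first chunk seeds the accumulator
// (and an empty stream would reject with ERR_INVALID_ARGS).
const joined = await Readable.from(['a', 'b', 'c'])
  .reduce((acc, chunk) => acc + chunk);
// joined === 'abc'
```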
- eventName?: string | symbol): this;
Removes all listeners, or those of the specified `eventName`.

It is bad practice to remove listeners added elsewhere in the code, particularly when the `EventEmitter` instance was created by some other component or module (e.g. sockets or file streams).

Returns a reference to the `EventEmitter`, so that calls can be chained.
- event: 'close', listener: () => void): this;
Removes the specified `listener` from the listener array for the event named `eventName`.

```js
const callback = (stream) => {
  console.log('someone connected!');
};
server.on('connection', callback);
// ...
server.removeListener('connection', callback);
```

`removeListener()` will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified `eventName`, then `removeListener()` must be called multiple times to remove each instance.

Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any `removeListener()` or `removeAllListeners()` calls after emitting and before the last listener finishes execution will not remove them from `emit()` in progress. Subsequent events behave as expected.

```js
import { EventEmitter } from 'node:events';
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();

const callbackA = () => {
  console.log('A');
  myEmitter.removeListener('event', callbackB);
};

const callbackB = () => {
  console.log('B');
};

myEmitter.on('event', callbackA);
myEmitter.on('event', callbackB);

// callbackA removes listener callbackB but it will still be called.
// Internal listener array at time of emit [callbackA, callbackB]
myEmitter.emit('event');
// Prints:
//   A
//   B

// callbackB is now removed.
// Internal listener array [callbackA]
myEmitter.emit('event');
// Prints:
//   A
```

Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the `emitter.listeners()` method will need to be recreated.

When a single function has been added as a handler multiple times for a single event (as in the example below), `removeListener()` will remove the most recently added instance. In the example the `once('ping')` listener is removed:

```js
import { EventEmitter } from 'node:events';
const ee = new EventEmitter();

function pong() {
  console.log('pong');
}

ee.on('ping', pong);
ee.once('ping', pong);
ee.removeListener('ping', pong);

ee.emit('ping');
ee.emit('ping');
```

Returns a reference to the `EventEmitter`, so that calls can be chained.

The `readable.resume()` method causes an explicitly paused `Readable` stream to resume emitting `'data'` events, switching the stream into flowing mode.

The `readable.resume()` method can be used to fully consume the data from a stream without actually processing any of that data:

```js
getReadableStreamSomehow()
  .resume()
  .on('end', () => {
    console.log('Reached the end, but did not read anything.');
  });
```

The `readable.resume()` method has no effect if there is a `'readable'` event listener.
- autoPadding?: boolean): this;
When using block encryption algorithms, the `Cipheriv` class will automatically add padding to the input data to the appropriate block size. To disable the default padding call `cipher.setAutoPadding(false)`.

When `autoPadding` is `false`, the length of the entire input data must be a multiple of the cipher's block size or `cipher.final()` will throw an error. Disabling automatic padding is useful for non-standard padding, for instance using `0x0` instead of PKCS padding.

The `cipher.setAutoPadding()` method must be called before `cipher.final()`.

@returns for method chaining.
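A short sketch of the alignment requirement: with auto-padding disabled, the total input must be an exact multiple of the cipher's block size (16 bytes for AES), and the decrypting side must disable padding as well.

```js
import { createCipheriv, createDecipheriv, randomBytes } from 'node:crypto';
import { Buffer } from 'node:buffer';

const key = randomBytes(32);
const iv = randomBytes(16);

const cipher = createCipheriv('aes-256-cbc', key, iv);
cipher.setAutoPadding(false); // must be called before cipher.final()

// Exactly one 16-byte block, so no padding is required.
const plaintext = Buffer.from('sixteen byte msg', 'utf8');
const encrypted = Buffer.concat([cipher.update(plaintext), cipher.final()]);

// The decipher side must also disable padding, or final() will throw
// when it tries to strip PKCS padding that is not there.
const decipher = createDecipheriv('aes-256-cbc', key, iv);
decipher.setAutoPadding(false);
const decrypted = Buffer.concat([decipher.update(encrypted), decipher.final()]);
// decrypted.toString('utf8') === 'sixteen byte msg'
```

Because no padding is added, the ciphertext here is exactly one block (16 bytes) long.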
- encoding: BufferEncoding): this;
The `writable.setDefaultEncoding()` method sets the default `encoding` for a `Writable` stream.

@param encoding The new default encoding
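A minimal sketch: after setting the default encoding to `'hex'`, string chunks written without an explicit encoding are decoded as hex rather than UTF-8.

```js
import { Writable } from 'node:stream';

const received = [];
const writable = new Writable({
  write(chunk, encoding, callback) {
    received.push(chunk); // chunk arrives as a decoded Buffer
    callback();
  },
});

writable.setDefaultEncoding('hex');
writable.write('deadbeef'); // decoded as 4 bytes of hex, not 8 UTF-8 bytes
await new Promise((resolve) => writable.end(resolve));
// received[0].length === 4
```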
- encoding: BufferEncoding): this;
The `readable.setEncoding()` method sets the character encoding for data read from the `Readable` stream.

By default, no encoding is assigned and stream data will be returned as `Buffer` objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than as `Buffer` objects. For instance, calling `readable.setEncoding('utf8')` will cause the output data to be interpreted as UTF-8 data, and passed as strings. Calling `readable.setEncoding('hex')` will cause the data to be encoded in hexadecimal string format.

The `Readable` stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream as `Buffer` objects.

```js
const readable = getReadableStreamSomehow();
readable.setEncoding('utf8');
readable.on('data', (chunk) => {
  assert.equal(typeof chunk, 'string');
  console.log('Got %d characters of string data:', chunk.length);
});
```

@param encoding The encoding to use.
- n: number): this;
By default `EventEmitter`s will print a warning if more than `10` listeners are added for a particular event. This is a useful default that helps finding memory leaks. The `emitter.setMaxListeners()` method allows the limit to be modified for this specific `EventEmitter` instance. The value can be set to `Infinity` (or `0`) to indicate an unlimited number of listeners.

Returns a reference to the `EventEmitter`, so that calls can be chained.
- some(): Promise<boolean>;
This method is similar to `Array.prototype.some` and calls `fn` on each chunk in the stream until the awaited return value is `true` (or any truthy value). Once an `fn` call on a chunk's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with `true`. If none of the `fn` calls on the chunks return a truthy value, the promise is fulfilled with `false`.

@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to `true` if `fn` returned a truthy value for at least one of the chunks.
- @param limit the number of chunks to take from the readable.
@returns a stream with limit chunks taken.
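A minimal sketch of both helpers (these are experimental stream helper methods; this assumes a runtime version that provides them):

```js
import { Readable } from 'node:stream';

// some(): short-circuits and destroys the stream as soon as fn is truthy.
const hasLarge = await Readable.from([1, 2, 50, 3]).some((chunk) => chunk > 10);
// hasLarge === true

// take(): produces a new stream limited to the first `limit` chunks.
const firstTwo = await Readable.from([1, 2, 3, 4]).take(2).toArray();
// firstTwo deep-equals [1, 2]
```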
- ): Promise<any[]>;
This method allows easily obtaining the contents of a stream.

As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.

@returns a promise containing an array with the contents of the stream.
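A minimal sketch (an experimental stream helper; assumes a runtime version that provides it). Because the whole stream is buffered, keep this to small streams:

```js
import { Readable } from 'node:stream';

// Collect the entire stream into memory in one call.
const contents = await Readable.from(['a', 'b', 'c']).toArray();
// contents deep-equals ['a', 'b', 'c']
```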
The `writable.uncork()` method flushes all data buffered since `cork` was called.

When using `writable.cork()` and `writable.uncork()` to manage the buffering of writes to a stream, defer calls to `writable.uncork()` using `process.nextTick()`. Doing so allows batching of all `writable.write()` calls that occur within a given Node.js event loop phase.

```js
stream.cork();
stream.write('some ');
stream.write('data ');
process.nextTick(() => stream.uncork());
```

If the `writable.cork()` method is called multiple times on a stream, the same number of calls to `writable.uncork()` must be called to flush the buffered data.

```js
stream.cork();
stream.write('some ');
stream.cork();
stream.write('data ');
process.nextTick(() => {
  stream.uncork();
  // The data will not be flushed until uncork() is called a second time.
  stream.uncork();
});
```

See also: `writable.cork()`.
- destination?: WritableStream): this;
The `readable.unpipe()` method detaches a `Writable` stream previously attached using the pipe method.

If the `destination` is not specified, then all pipes are detached.

If the `destination` is specified, but no pipe is set up for it, then the method does nothing.

```js
import fs from 'node:fs';
const readable = getReadableStreamSomehow();
const writable = fs.createWriteStream('file.txt');
// All the data from readable goes into 'file.txt',
// but only for the first second.
readable.pipe(writable);
setTimeout(() => {
  console.log('Stop writing to file.txt.');
  readable.unpipe(writable);
  console.log('Manually close the file stream.');
  writable.end();
}, 1000);
```

@param destination Optional specific stream to unpipe
- chunk: any, encoding?: BufferEncoding): void;
Passing `chunk` as `null` signals the end of the stream (EOF) and behaves the same as `readable.push(null)`, after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.

The `readable.unshift()` method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.

The `stream.unshift(chunk)` method cannot be called after the `'end'` event has been emitted or a runtime error will be thrown.

Developers using `stream.unshift()` often should consider switching to use of a `Transform` stream instead. See the API for stream implementers section for more information.

```js
// Pull off a header delimited by \n\n.
// Use unshift() if we get too much.
// Call the callback with (error, header, stream).
import { StringDecoder } from 'node:string_decoder';
function parseHeader(stream, callback) {
  stream.on('error', callback);
  stream.on('readable', onReadable);
  const decoder = new StringDecoder('utf8');
  let header = '';
  function onReadable() {
    let chunk;
    while (null !== (chunk = stream.read())) {
      const str = decoder.write(chunk);
      if (str.includes('\n\n')) {
        // Found the header boundary.
        const split = str.split(/\n\n/);
        header += split.shift();
        const remaining = split.join('\n\n');
        const buf = Buffer.from(remaining, 'utf8');
        stream.removeListener('error', callback);
        // Remove the 'readable' listener before unshifting.
        stream.removeListener('readable', onReadable);
        if (buf.length)
          stream.unshift(buf);
        // Now the body of the message can be read from the stream.
        callback(null, header, stream);
        return;
      }
      // Still reading the header.
      header += str;
    }
  }
}
```

Unlike push, `stream.unshift(chunk)` will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results if `readable.unshift()` is called during a read (i.e. from within a `_read` implementation on a custom stream). Following the call to `readable.unshift()` with an immediate push will reset the reading state appropriately, however it is best to simply avoid calling `readable.unshift()` while in the process of performing a read.

@param chunk Chunk of data to unshift onto the read queue. For streams not operating in object mode, `chunk` must be a {string}, {Buffer}, {TypedArray}, {DataView} or `null`. For object mode streams, `chunk` may be any JavaScript value.
@param encoding Encoding of string chunks. Must be a valid `Buffer` encoding, such as `'utf8'` or `'ascii'`.
- ): NonSharedBuffer;
Updates the cipher with `data`. If the `inputEncoding` argument is given, the `data` argument is a string using the specified encoding. If the `inputEncoding` argument is not given, `data` must be a `Buffer`, `TypedArray`, or `DataView`. If `data` is a `Buffer`, `TypedArray`, or `DataView`, then `inputEncoding` is ignored.

The `outputEncoding` specifies the output format of the enciphered data. If the `outputEncoding` is specified, a string using the specified encoding is returned. If no `outputEncoding` is provided, a `Buffer` is returned.

The `cipher.update()` method can be called multiple times with new data until `cipher.final()` is called. Calling `cipher.update()` after `cipher.final()` will result in an error being thrown.

@param inputEncoding The `encoding` of the data.
@param outputEncoding The `encoding` of the return value.

The remaining overloads share the same behavior:

data: string,): NonSharedBuffer;
data: ArrayBufferView, inputEncoding: undefined,): string;
data: string,): string;

- wrap(stream: ReadableStream): this;
Prior to Node.js 0.10, streams did not implement the entire `node:stream` module API as it is currently defined. (See Compatibility for more information.)

When using an older Node.js library that emits `'data'` events and has a pause method that is advisory only, the `readable.wrap()` method can be used to create a `Readable` stream that uses the old stream as its data source.

It will rarely be necessary to use `readable.wrap()` but the method has been provided as a convenience for interacting with older Node.js applications and libraries.

```js
import { OldReader } from './old-api-module.js';
import { Readable } from 'node:stream';
const oreader = new OldReader();
const myReader = new Readable().wrap(oreader);

myReader.on('readable', () => {
  myReader.read(); // etc.
});
```

@param stream An "old style" readable stream
- chunk: any,): boolean;
The `writable.write()` method writes some data to the stream, and calls the supplied `callback` once the data has been fully handled. If an error occurs, the `callback` will be called with the error as its first argument. The `callback` is called asynchronously and before `'error'` is emitted.

The return value is `true` if the internal buffer is less than the `highWaterMark` configured when the stream was created after admitting `chunk`. If `false` is returned, further attempts to write data to the stream should stop until the `'drain'` event is emitted.

While a stream is not draining, calls to `write()` will buffer `chunk`, and return `false`. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the `'drain'` event will be emitted. Once `write()` returns `false`, do not write more chunks until the `'drain'` event is emitted. While calling `write()` on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

Writing data while the stream is not draining is particularly problematic for a `Transform`, because `Transform` streams are paused by default until they are piped or a `'data'` or `'readable'` event handler is added.

If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a `Readable` and use pipe. However, if calling `write()` is preferred, it is possible to respect backpressure and avoid memory issues using the `'drain'` event:

```js
function write(data, cb) {
  if (!stream.write(data)) {
    stream.once('drain', cb);
  } else {
    process.nextTick(cb);
  }
}

// Wait for cb to be called before doing any other write.
write('hello', () => {
  console.log('Write completed, do more writes now.');
});
```

A `Writable` stream in object mode will always ignore the `encoding` argument.

@param chunk Optional data to write. For streams not operating in object mode, `chunk` must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, `chunk` may be any JavaScript value other than `null`.
@param callback Callback for when this chunk of data is flushed.
@returns `false` if the stream wishes for the calling code to wait for the `'drain'` event to be emitted before continuing to write additional data; otherwise `true`.

chunk: any, encoding: BufferEncoding,): boolean;
This overload behaves as described above.
@param encoding The encoding, if `chunk` is a string.
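A small sketch of the return value and the `'drain'` event: once the buffered bytes reach the `highWaterMark`, `write()` returns `false`, and `'drain'` fires after the buffer empties. The tiny `highWaterMark` here is only for demonstration.

```js
import { Writable } from 'node:stream';

const sink = new Writable({
  highWaterMark: 4, // tiny on purpose, to trigger backpressure immediately
  write(chunk, encoding, callback) {
    // Complete the write asynchronously so the chunk stays buffered briefly.
    setImmediate(callback);
  },
});

// 16 bytes >= highWaterMark, so write() reports backpressure.
const ok = sink.write('sixteen byte msg');
// ok === false

// 'drain' is emitted once the buffered data has been handled.
await new Promise((resolve) => sink.once('drain', resolve));
```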
interface CipherOCBOptions
- signal?: AbortSignal
When provided, the corresponding `AbortController` can be used to cancel an asynchronous action.
- transform?: (this: Transform, chunk: any, encoding: BufferEncoding, callback: TransformCallback) => void
interface DecipherCCM
Instances of the `Decipheriv` class are used to decrypt data. The class can be used in one of two ways:
- As a `stream` that is both readable and writable, where plain encrypted data is written to produce unencrypted data on the readable side, or
- Using the `decipher.update()` and `decipher.final()` methods to produce the unencrypted data.

The `createDecipheriv` method is used to create `Decipheriv` instances. `Decipheriv` objects are not to be created directly using the `new` keyword.

Example: Using `Decipheriv` objects as streams:

```js
import { Buffer } from 'node:buffer';
const {
  scryptSync,
  createDecipheriv,
} = await import('node:crypto');

const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';
// Key length is dependent on the algorithm. In this case for aes192, it is
// 24 bytes (192 bits).
// Use the async `crypto.scrypt()` instead.
const key = scryptSync(password, 'salt', 24);
// The IV is usually passed along with the ciphertext.
const iv = Buffer.alloc(16, 0); // Initialization vector.

const decipher = createDecipheriv(algorithm, key, iv);

let decrypted = '';
decipher.on('readable', () => {
  let chunk;
  while (null !== (chunk = decipher.read())) {
    decrypted += chunk.toString('utf8');
  }
});
decipher.on('end', () => {
  console.log(decrypted);
  // Prints: some clear text data
});

// Encrypted with same algorithm, key and iv.
const encrypted =
  'e5f79c5915c02171eec6b212d5520d44480993d7d622a7c4c2da32f6efda0ffa';
decipher.write(encrypted, 'hex');
decipher.end();
```

Example: Using `Decipheriv` and piped streams:

```js
import {
  createReadStream,
  createWriteStream,
} from 'node:fs';
import { Buffer } from 'node:buffer';
const {
  scryptSync,
  createDecipheriv,
} = await import('node:crypto');

const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';
// Use the async `crypto.scrypt()` instead.
const key = scryptSync(password, 'salt', 24);
// The IV is usually passed along with the ciphertext.
const iv = Buffer.alloc(16, 0); // Initialization vector.

const decipher = createDecipheriv(algorithm, key, iv);

const input = createReadStream('test.enc');
const output = createWriteStream('test.js');

input.pipe(decipher).pipe(output);
```

Example: Using the `decipher.update()` and `decipher.final()` methods:

```js
import { Buffer } from 'node:buffer';
const {
  scryptSync,
  createDecipheriv,
} = await import('node:crypto');

const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';
// Use the async `crypto.scrypt()` instead.
const key = scryptSync(password, 'salt', 24);
// The IV is usually passed along with the ciphertext.
const iv = Buffer.alloc(16, 0); // Initialization vector.

const decipher = createDecipheriv(algorithm, key, iv);

// Encrypted using same algorithm, key and iv.
const encrypted =
  'e5f79c5915c02171eec6b212d5520d44480993d7d622a7c4c2da32f6efda0ffa';
let decrypted = decipher.update(encrypted, 'hex', 'utf8');
decrypted += decipher.final('utf8');
console.log(decrypted);
// Prints: some clear text data
```

- allowHalfOpen: boolean
If `false` then the stream will automatically end the writable side when the readable side ends. Set initially by the `allowHalfOpen` constructor option, which defaults to `true`.

This can be changed manually to change the half-open behavior of an existing `Duplex` stream instance, but must be changed before the `'end'` event is emitted.
- readable: boolean
Is `true` if it is safe to call read, which means the stream has not been destroyed or emitted `'error'` or `'end'`.
- readonly readableAborted: boolean
Returns whether the stream was destroyed or errored before emitting `'end'`.
- readonly readableEncoding: null | BufferEncoding
Getter for the property `encoding` of a given `Readable` stream. The `encoding` property can be set using the setEncoding method.
- readonly readableFlowing: null | boolean
This property reflects the current state of a `Readable` stream as described in the Three states section.
- readonly readableHighWaterMark: number
Returns the value of `highWaterMark` passed when creating this `Readable`.
- readonly readableLength: number
This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the `highWaterMark`.
- readonly writable: boolean
Is `true` if it is safe to call `writable.write()`, which means the stream has not been destroyed, errored, or ended.
- readonly writableAborted: boolean
Returns whether the stream was destroyed or errored before emitting `'finish'`.
- readonly writableCorked: number
Number of times `writable.uncork()` needs to be called in order to fully uncork the stream.
- readonly writableEnded: boolean
Is `true` after `writable.end()` has been called. This property does not indicate whether the data has been flushed; for this use `writable.writableFinished` instead.
- readonly writableHighWaterMark: number
Return the value of `highWaterMark` passed when creating this `Writable`.
- readonly writableLength: number
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the `highWaterMark`.
- readonly writableNeedDrain: boolean
Is `true` if the stream's buffer has been full and the stream will emit `'drain'`.

Calls `readable.destroy()` with an `AbortError` and returns a promise that fulfills when the stream is finished.
- @returns `AsyncIterator` to fully consume the stream.
- event: 'close', listener: () => void): this;
Event emitter. The defined events on documents include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'data', listener: (chunk: any) => void): this;
event: 'drain', listener: () => void): this;
event: 'end', listener: () => void): this;
event: 'error',): this;
event: 'finish', listener: () => void): this;
event: 'pause', listener: () => void): this;
event: 'pipe',): this;
event: 'readable', listener: () => void): this;
event: 'resume', listener: () => void): this;
event: 'unpipe',): this;
event: string | symbol, listener: (...args: any[]) => void): this;
This method returns a new stream with chunks of the underlying stream paired with a counter in the form `[index, chunk]`. The first index value is `0` and it increases by 1 for each chunk produced.

@returns a stream of indexed pairs.
- stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
The writable.cork() method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.
The primary intent of writable.cork() is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them all to writable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork() without implementing writable._writev() may have an adverse effect on throughput.
See also: writable.uncork(), writable._writev().
- ): this;
Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the readable stream will release any internal resources and subsequent calls to push() will be ignored.
Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.
Implementors should not override this method, but instead implement readable._destroy().
@param error Error which will be passed as payload in 'error' event
- drop(limit: number,
This method returns a new stream with the first limit chunks dropped from the start.
@param limit the number of chunks to drop from the readable.
@returns a stream with limit chunks dropped from the start.
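A minimal sketch of drop() in use, assuming a runtime where the Readable helper methods are available (Node.js 17.5+ or Bun):

```javascript
import { Readable } from 'node:stream';

// Drop the first two chunks, then drain the rest into an array.
const remaining = await Readable.from([1, 2, 3, 4]).drop(2).toArray();
console.log(remaining); // [ 3, 4 ]
```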
- emit(event: 'close'): boolean;
Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.
Returns true if the event had listeners, false otherwise.
import { EventEmitter } from 'node:events';
const myEmitter = new EventEmitter();
// First listener
myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); });
// Second listener
myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); });
// Third listener
myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); });
console.log(myEmitter.listeners('event'));
myEmitter.emit('event', 1, 2, 3, 4, 5);
// Prints:
// [ [Function: firstListener], [Function: secondListener], [Function: thirdListener] ]
// Helloooo! first listener
// event with parameters 1, 2 in second listener
// event with parameters 1, 2, 3, 4, 5 in third listener
- end(cb?: () => void): this;
Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream. Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'.
import fs from 'node:fs';
const file = fs.createWriteStream('example.txt');
file.write('hello, ');
file.end('world!');
// Writing more now is not allowed!
end(chunk: any, cb?: () => void): this;
@param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.
end(chunk: any, encoding: BufferEncoding, cb?: () => void): this;
@param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.
@param encoding The encoding if chunk is a string
Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or Symbols.
import { EventEmitter } from 'node:events';
const myEE = new EventEmitter();
myEE.on('foo', () => {});
myEE.on('bar', () => {});
const sym = Symbol('symbol');
myEE.on(sym, () => {});
console.log(myEE.eventNames());
// Prints: [ 'foo', 'bar', Symbol(symbol) ]
- ): Promise<boolean>;
This method is similar to Array.prototype.every and calls fn on each chunk in the stream to check if all awaited return values are truthy for fn. Once an fn call on a chunk's awaited return value is falsy, the stream is destroyed and the promise is fulfilled with false. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true.
@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to true if fn returned a truthy value for every one of the chunks.
This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise, that promise will be awaited.
@param fn a function to filter chunks from the stream. Async or not.
@returns a stream filtered with the predicate fn.
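A minimal sketch of filter() in use, assuming Node.js 17.4+ or Bun (where these helpers exist); the predicate may also be async:

```javascript
import { Readable } from 'node:stream';

// Keep only even numbers from the source stream.
const evens = await Readable.from([1, 2, 3, 4, 5])
  .filter((n) => n % 2 === 0)
  .toArray();
console.log(evens); // [ 2, 4 ]
```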
Once the decipher.final() method has been called, the Decipheriv object can no longer be used to decrypt data. Attempts to call decipher.final() more than once will result in an error being thrown.
@returns Any remaining deciphered contents. If outputEncoding is specified, a string is returned. If an outputEncoding is not provided, a Buffer is returned.
outputEncoding: BufferEncoding): string;
@param outputEncoding The encoding of the return value.
@returns Any remaining deciphered contents. If outputEncoding is specified, a string is returned. If an outputEncoding is not provided, a Buffer is returned.
- ): Promise<undefined | T>;
This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.
@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.
find(): Promise<any>;
@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.
This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result. It is possible to return a stream or another iterable or async iterable from fn, and the result streams will be merged (flattened) into the returned stream.
@param fn a function to map over every chunk in the stream. May be async. May be a stream or generator.
@returns a stream flat-mapped with the function fn.
- ): Promise<void>;
This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise, that promise will be awaited.
This method is different from for await...of loops in that it can optionally process chunks concurrently. In addition, a forEach iteration can only be stopped by having passed a signal option and aborting the related AbortController, while for await...of can be stopped with break or return. In either case the stream will be destroyed.
This method is different from listening to the 'data' event in that it uses the readable event in the underlying machinery and can limit the number of concurrent fn calls.
@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise for when the stream has finished.
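A minimal sketch of forEach() in use, assuming Node.js 17.5+ or Bun; a { concurrency } option can also be passed to process chunks in parallel:

```javascript
import { Readable } from 'node:stream';

// Visit each chunk in order; the returned promise resolves when the stream ends.
const seen = [];
await Readable.from(['a', 'b', 'c']).forEach((chunk) => {
  seen.push(chunk);
});
console.log(seen); // [ 'a', 'b', 'c' ]
```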
Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to EventEmitter.defaultMaxListeners.
The readable.isPaused() method returns the current operating state of the Readable. This is used primarily by the mechanism that underlies the readable.pipe() method. In most typical cases, there will be no reason to use this method directly.
const readable = new stream.Readable();
readable.isPaused(); // === false
readable.pause();
readable.isPaused(); // === true
readable.resume();
readable.isPaused(); // === false
- options?: { destroyOnReturn: boolean }): AsyncIterator<any>;
The iterator created by this method gives users the option to cancel the destruction of the stream if the for await...of loop is exited by return, break, or throw, or if the iterator should destroy the stream if the stream emitted an error during iteration.
- eventName: string | symbol, listener?: Function): number;
Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.
@param eventName The name of the event being listened for
@param listener The event handler function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named eventName.
server.on('connection', (stream) => { console.log('someone connected!'); });
console.log(util.inspect(server.listeners('connection')));
// Prints: [ [Function] ]
- map(
This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise, that promise will be awaited before being passed to the result stream.
@param fn a function to map over every chunk in the stream. Async or not.
@returns a stream mapped with the function fn.
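A minimal sketch of map() in use, assuming Node.js 17.4+ or Bun; an async fn would be awaited per chunk:

```javascript
import { Readable } from 'node:stream';

// Double each chunk; the result is itself a stream, drained here via toArray().
const doubled = await Readable.from([1, 2, 3])
  .map((n) => n * 2)
  .toArray();
console.log(doubled); // [ 2, 4, 6 ]
```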
- eventName: string | symbol, listener: (...args: any[]) => void): this;
Alias for emitter.removeListener().
- on(event: 'close', listener: () => void): this;
Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.
server.on('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the EventEmitter, so that calls can be chained.
By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.
import { EventEmitter } from 'node:events';
const myEE = new EventEmitter();
myEE.on('foo', () => console.log('a'));
myEE.prependListener('foo', () => console.log('b'));
myEE.emit('foo');
// Prints:
// b
// a
@param listener The callback function
- once(event: 'close', listener: () => void): this;
Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.
server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the EventEmitter, so that calls can be chained.
By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.
import { EventEmitter } from 'node:events';
const myEE = new EventEmitter();
myEE.once('foo', () => console.log('a'));
myEE.prependOnceListener('foo', () => console.log('b'));
myEE.emit('foo');
// Prints:
// b
// a
@param listener The callback function
The readable.pause() method will cause a stream in flowing mode to stop emitting 'data' events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.
const readable = getReadableStreamSomehow();
readable.on('data', (chunk) => {
  console.log(`Received ${chunk.length} bytes of data.`);
  readable.pause();
  console.log('There will be no additional data for 1 second.');
  setTimeout(() => {
    console.log('Now data will start flowing again.');
    readable.resume();
  }, 1000);
});
The readable.pause() method has no effect if there is a 'readable' event listener.
- event: 'close', listener: () => void): this;
Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.
server.prependListener('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the EventEmitter, so that calls can be chained.
@param listener The callback function
- event: 'close', listener: () => void): this;
Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.
server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the EventEmitter, so that calls can be chained.
@param listener The callback function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named eventName, including any wrappers (such as those created by .once()).
import { EventEmitter } from 'node:events';
const emitter = new EventEmitter();
emitter.once('log', () => console.log('log once'));
// Returns a new Array with a function `onceWrapper` which has a property
// `listener` which contains the original listener bound above
const listeners = emitter.rawListeners('log');
const logFnWrapper = listeners[0];
// Logs "log once" to the console and does not unbind the `once` event
logFnWrapper.listener();
// Logs "log once" to the console and removes the listener
logFnWrapper();
emitter.on('log', () => console.log('log persistently'));
// Will return a new Array with a single function bound by `.on()` above
const newListeners = emitter.rawListeners('log');
// Logs "log persistently" twice
newListeners[0]();
emitter.emit('log');
- read(size?: number): any;
The readable.read() method reads data out of the internal buffer and returns it. If no data is available to be read, null is returned. By default, the data is returned as a Buffer object unless an encoding has been specified using the readable.setEncoding() method or the stream is operating in object mode.
The optional size argument specifies a specific number of bytes to read. If size bytes are not available to be read, null will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.
If the size argument is not specified, all of the data contained in the internal buffer will be returned.
The size argument must be less than or equal to 1 GiB.
The readable.read() method should only be called on Readable streams operating in paused mode. In flowing mode, readable.read() is called automatically until the internal buffer is fully drained.
const readable = getReadableStreamSomehow();
// 'readable' may be triggered multiple times as data is buffered in
readable.on('readable', () => {
  let chunk;
  console.log('Stream is readable (new data received in buffer)');
  // Use a loop to make sure we read all currently available data
  while (null !== (chunk = readable.read())) {
    console.log(`Read ${chunk.length} bytes of data...`);
  }
});
// 'end' will be triggered once when there is no more data available
readable.on('end', () => { console.log('Reached end of stream.'); });
Each call to readable.read() returns a chunk of data, or null. The chunks are not concatenated. A while loop is necessary to consume all data currently in the buffer. When reading a large file, .read() may return null, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new 'readable' event will be emitted when there is more data in the buffer. Finally the 'end' event will be emitted when there is no more data to come.
Therefore, to read a file's whole contents from a readable, it is necessary to collect chunks across multiple 'readable' events:
const chunks = [];
readable.on('readable', () => {
  let chunk;
  while (null !== (chunk = readable.read())) {
    chunks.push(chunk);
  }
});
readable.on('end', () => { const content = chunks.join(''); });
A Readable stream in object mode will always return a single item from a call to readable.read(size), regardless of the value of the size argument.
If the readable.read() method returns a chunk of data, a 'data' event will also be emitted.
Calling read after the 'end' event has been emitted will return null. No runtime error will be raised.
@param size Optional argument to specify how much data to read.
- initial?: undefined,): Promise<T>;
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied, the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.
The reducer function iterates the stream element by element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the readable.map method.
@param fn a reducer function to call over every chunk in the stream. Async or not.
@param initial the initial value to use in the reduction.
@returns a promise for the final value of the reduction.
initial: T,): Promise<T>;
@param fn a reducer function to call over every chunk in the stream. Async or not.
@param initial the initial value to use in the reduction.
@returns a promise for the final value of the reduction.
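A minimal sketch of reduce() in use, assuming Node.js 17.5+ or Bun; here the accumulator starts at an explicit initial value of 0:

```javascript
import { Readable } from 'node:stream';

// Sum the chunks of the stream into a single value.
const total = await Readable.from([1, 2, 3, 4]).reduce((acc, n) => acc + n, 0);
console.log(total); // 10
```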
- eventName?: string | symbol): this;
Removes all listeners, or those of the specified eventName.
It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).
Returns a reference to the EventEmitter, so that calls can be chained.
- event: 'close', listener: () => void): this;
Removes the specified listener from the listener array for the event named eventName.
const callback = (stream) => { console.log('someone connected!'); };
server.on('connection', callback);
// ...
server.removeListener('connection', callback);
removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance.
Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from emit() in progress. Subsequent events behave as expected.
import { EventEmitter } from 'node:events';
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();
const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); };
const callbackB = () => { console.log('B'); };
myEmitter.on('event', callbackA);
myEmitter.on('event', callbackB);
// callbackA removes listener callbackB but it will still be called.
// Internal listener array at time of emit [callbackA, callbackB]
myEmitter.emit('event');
// Prints:
// A
// B
// callbackB is now removed.
// Internal listener array [callbackA]
myEmitter.emit('event');
// Prints:
// A
Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.
When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed:
import { EventEmitter } from 'node:events';
const ee = new EventEmitter();
function pong() { console.log('pong'); }
ee.on('ping', pong);
ee.once('ping', pong);
ee.removeListener('ping', pong);
ee.emit('ping');
ee.emit('ping');
Returns a reference to the EventEmitter, so that calls can be chained.
readable.resume()method causes an explicitly pausedReadablestream to resume emitting'data'events, switching the stream into flowing mode.The
readable.resume()method can be used to fully consume the data from a stream without actually processing any of that data:getReadableStreamSomehow() .resume() .on('end', () => { console.log('Reached the end, but did not read anything.'); });The
readable.resume()method has no effect if there is a'readable'event listener.- auto_padding?: boolean): this;
When data has been encrypted without standard block padding, calling
decipher.setAutoPadding(false)will disable automatic padding to preventdecipher.final()from checking for and removing padding.Turning auto padding off will only work if the input data's length is a multiple of the ciphers block size.
The
decipher.setAutoPadding()method must be called beforedecipher.final().@returnsfor method chaining.
- encoding: BufferEncoding): this;
The writable.setDefaultEncoding() method sets the default encoding for a Writable stream.
@param encoding The new default encoding
- encoding: BufferEncoding): this;
The readable.setEncoding() method sets the character encoding for data read from the Readable stream.
By default, no encoding is assigned and stream data will be returned as Buffer objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than as Buffer objects. For instance, calling readable.setEncoding('utf8') will cause the output data to be interpreted as UTF-8 data, and passed as strings. Calling readable.setEncoding('hex') will cause the data to be encoded in hexadecimal string format.
The Readable stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream as Buffer objects.
const readable = getReadableStreamSomehow();
readable.setEncoding('utf8');
readable.on('data', (chunk) => {
  assert.equal(typeof chunk, 'string');
  console.log('Got %d characters of string data:', chunk.length);
});
@param encoding The encoding to use.
- n: number): this;
By default EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps finding memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.
Returns a reference to the EventEmitter, so that calls can be chained.
- some(): Promise<boolean>;
This method is similar to Array.prototype.some and calls fn on each chunk in the stream until the awaited return value is true (or any truthy value). Once an fn call on a chunk's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false.
@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to true if fn returned a truthy value for at least one of the chunks.
- @param limit the number of chunks to take from the readable.
@returns a stream with limit chunks taken.
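A minimal sketch of some() and take() together, assuming Node.js 17.5+ or Bun; some() short-circuits on the first truthy result, while take() truncates the stream:

```javascript
import { Readable } from 'node:stream';

// some() destroys the stream as soon as a chunk matches.
const hasEven = await Readable.from([1, 3, 4, 5]).some((n) => n % 2 === 0);

// take() yields only the first two chunks.
const firstTwo = await Readable.from([1, 2, 3]).take(2).toArray();
console.log(hasEven, firstTwo); // true [ 1, 2 ]
```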
- ): Promise<any[]>;
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.
@returns a promise containing an array with the contents of the stream.
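A minimal sketch of toArray() in use, assuming Node.js 17.5+ or Bun; fine for small streams and tests, not for large inputs:

```javascript
import { Readable } from 'node:stream';

// Drain the whole stream into memory as an array of chunks.
const contents = await Readable.from(['a', 'b', 'c']).toArray();
console.log(contents); // [ 'a', 'b', 'c' ]
```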
The writable.uncork() method flushes all data buffered since cork was called.
When using writable.cork() and writable.uncork() to manage the buffering of writes to a stream, defer calls to writable.uncork() using process.nextTick(). Doing so allows batching of all writable.write() calls that occur within a given Node.js event loop phase.
stream.cork();
stream.write('some ');
stream.write('data ');
process.nextTick(() => stream.uncork());
If the writable.cork() method is called multiple times on a stream, the same number of calls to writable.uncork() must be called to flush the buffered data.
stream.cork();
stream.write('some ');
stream.cork();
stream.write('data ');
process.nextTick(() => {
  stream.uncork();
  // The data will not be flushed until uncork() is called a second time.
  stream.uncork();
});
See also: writable.cork().
- destination?: WritableStream): this;
The readable.unpipe() method detaches a Writable stream previously attached using the pipe method.
If the destination is not specified, then all pipes are detached.
If the destination is specified, but no pipe is set up for it, then the method does nothing.
import fs from 'node:fs';
const readable = getReadableStreamSomehow();
const writable = fs.createWriteStream('file.txt');
// All the data from readable goes into 'file.txt',
// but only for the first second.
readable.pipe(writable);
setTimeout(() => {
  console.log('Stop writing to file.txt.');
  readable.unpipe(writable);
  console.log('Manually close the file stream.');
  writable.end();
}, 1000);
@param destination Optional specific stream to unpipe
- chunk: any, encoding?: BufferEncoding): void;
Passing chunk as null signals the end of the stream (EOF) and behaves the same as readable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.
The readable.unshift() method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.
The stream.unshift(chunk) method cannot be called after the 'end' event has been emitted or a runtime error will be thrown.
Developers using stream.unshift() often should consider switching to use of a Transform stream instead. See the API for stream implementers section for more information.
// Pull off a header delimited by \n\n.
// Use unshift() if we get too much.
// Call the callback with (error, header, stream).
import { StringDecoder } from 'node:string_decoder';
function parseHeader(stream, callback) {
  stream.on('error', callback);
  stream.on('readable', onReadable);
  const decoder = new StringDecoder('utf8');
  let header = '';
  function onReadable() {
    let chunk;
    while (null !== (chunk = stream.read())) {
      const str = decoder.write(chunk);
      if (str.includes('\n\n')) {
        // Found the header boundary.
        const split = str.split(/\n\n/);
        header += split.shift();
        const remaining = split.join('\n\n');
        const buf = Buffer.from(remaining, 'utf8');
        stream.removeListener('error', callback);
        // Remove the 'readable' listener before unshifting.
        stream.removeListener('readable', onReadable);
        if (buf.length) stream.unshift(buf);
        // Now the body of the message can be read from the stream.
        callback(null, header, stream);
        return;
      }
      // Still reading the header.
      header += str;
    }
  }
}
Unlike push, stream.unshift(chunk) will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results if readable.unshift() is called during a read (i.e. from within a _read implementation on a custom stream). Following the call to readable.unshift() with an immediate push will reset the reading state appropriately; however, it is best to simply avoid calling readable.unshift() while in the process of performing a read.
@param chunk Chunk of data to unshift onto the read queue. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray}, {DataView} or null. For object mode streams, chunk may be any JavaScript value.
@param encoding Encoding of string chunks. Must be a valid Buffer encoding, such as 'utf8' or 'ascii'.
- data: ArrayBufferView): NonSharedBuffer;
Updates the decipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer. If data is a Buffer then inputEncoding is ignored.
The outputEncoding specifies the output format of the enciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.
The decipher.update() method can be called multiple times with new data until decipher.final() is called. Calling decipher.update() after decipher.final() will result in an error being thrown.
data: string,): NonSharedBuffer;
@param inputEncoding The encoding of the data string.
data: ArrayBufferView, inputEncoding: undefined,): string;
@param inputEncoding The encoding of the data string.
@param outputEncoding The encoding of the return value.
data: string,): string;
@param inputEncoding The encoding of the data string.
@param outputEncoding The encoding of the return value.
- wrap(stream: ReadableStream): this;
Prior to Node.js 0.10, streams did not implement the entire node:stream module API as it is currently defined. (See Compatibility for more information.)

When using an older Node.js library that emits 'data' events and has a pause method that is advisory only, the readable.wrap() method can be used to create a Readable stream that uses the old stream as its data source.

It will rarely be necessary to use readable.wrap() but the method has been provided as a convenience for interacting with older Node.js applications and libraries.

import { OldReader } from './old-api-module.js';
import { Readable } from 'node:stream';

const oreader = new OldReader();
const myReader = new Readable().wrap(oreader);

myReader.on('readable', () => {
  myReader.read(); // etc.
});

@param stream An "old style" readable stream
- chunk: any,): boolean;

The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

Writing data while the stream is not draining is particularly problematic for a Transform, because Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

function write(data, cb) {
  if (!stream.write(data)) {
    stream.once('drain', cb);
  } else {
    process.nextTick(cb);
  }
}

// Wait for cb to be called before doing any other write.
write('hello', () => {
  console.log('Write completed, do more writes now.');
});

A Writable stream in object mode will always ignore the encoding argument.

@param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.
@param callback Callback for when this chunk of data is flushed.
@returns false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.

chunk: any, encoding: BufferEncoding,): boolean;

The writable.write() method behaves as described above.

@param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.
@param encoding The encoding, if chunk is a string.
@param callback Callback for when this chunk of data is flushed.
@returns false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.
interface DecipherChaCha20Poly1305

Instances of the Decipheriv class are used to decrypt data. The class can be used in one of two ways:

- As a stream that is both readable and writable, where plain encrypted data is written to produce unencrypted data on the readable side, or
- Using the decipher.update() and decipher.final() methods to produce the unencrypted data.

The createDecipheriv method is used to create Decipheriv instances. Decipheriv objects are not to be created directly using the new keyword.

Example: Using Decipheriv objects as streams:

import { Buffer } from 'node:buffer';
const {
  scryptSync,
  createDecipheriv,
} = await import('node:crypto');

const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';
// Key length is dependent on the algorithm. In this case for aes192, it is
// 24 bytes (192 bits).
// Use the async `crypto.scrypt()` instead.
const key = scryptSync(password, 'salt', 24);
// The IV is usually passed along with the ciphertext.
const iv = Buffer.alloc(16, 0); // Initialization vector.

const decipher = createDecipheriv(algorithm, key, iv);

let decrypted = '';
decipher.on('readable', () => {
  let chunk;
  while (null !== (chunk = decipher.read())) {
    decrypted += chunk.toString('utf8');
  }
});
decipher.on('end', () => {
  console.log(decrypted);
  // Prints: some clear text data
});

// Encrypted with same algorithm, key and iv.
const encrypted =
  'e5f79c5915c02171eec6b212d5520d44480993d7d622a7c4c2da32f6efda0ffa';
decipher.write(encrypted, 'hex');
decipher.end();

Example: Using Decipheriv and piped streams:

import {
  createReadStream,
  createWriteStream,
} from 'node:fs';
import { Buffer } from 'node:buffer';
const {
  scryptSync,
  createDecipheriv,
} = await import('node:crypto');

const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';
// Use the async `crypto.scrypt()` instead.
const key = scryptSync(password, 'salt', 24);
// The IV is usually passed along with the ciphertext.
const iv = Buffer.alloc(16, 0); // Initialization vector.

const decipher = createDecipheriv(algorithm, key, iv);

const input = createReadStream('test.enc');
const output = createWriteStream('test.js');

input.pipe(decipher).pipe(output);

Example: Using the decipher.update() and decipher.final() methods:

import { Buffer } from 'node:buffer';
const {
  scryptSync,
  createDecipheriv,
} = await import('node:crypto');

const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';
// Use the async `crypto.scrypt()` instead.
const key = scryptSync(password, 'salt', 24);
// The IV is usually passed along with the ciphertext.
const iv = Buffer.alloc(16, 0); // Initialization vector.

const decipher = createDecipheriv(algorithm, key, iv);

// Encrypted using same algorithm, key and iv.
const encrypted =
  'e5f79c5915c02171eec6b212d5520d44480993d7d622a7c4c2da32f6efda0ffa';
let decrypted = decipher.update(encrypted, 'hex', 'utf8');
decrypted += decipher.final('utf8');
console.log(decrypted);
// Prints: some clear text data

- allowHalfOpen: boolean
If false then the stream will automatically end the writable side when the readable side ends. Set initially by the allowHalfOpen constructor option, which defaults to true.

This can be changed manually to change the half-open behavior of an existing Duplex stream instance, but must be changed before the 'end' event is emitted.

- readable: boolean

Is true if it is safe to call read, which means the stream has not been destroyed or emitted 'error' or 'end'.

- readonly readableAborted: boolean

Returns whether the stream was destroyed or errored before emitting 'end'.

- readonly readableEncoding: null | BufferEncoding

Getter for the property encoding of a given Readable stream. The encoding property can be set using the setEncoding method.

- readonly readableFlowing: null | boolean

This property reflects the current state of a Readable stream as described in the Three states section.

- readonly readableHighWaterMark: number

Returns the value of highWaterMark passed when creating this Readable.

- readonly readableLength: number

This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the highWaterMark.

- readonly writable: boolean

Is true if it is safe to call writable.write(), which means the stream has not been destroyed, errored, or ended.

- readonly writableAborted: boolean

Returns whether the stream was destroyed or errored before emitting 'finish'.

- readonly writableCorked: number

Number of times writable.uncork() needs to be called in order to fully uncork the stream.

- readonly writableEnded: boolean

Is true after writable.end() has been called. This property does not indicate whether the data has been flushed; for this use writable.writableFinished instead.

- readonly writableHighWaterMark: number

Returns the value of highWaterMark passed when creating this Writable.

- readonly writableLength: number

This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the highWaterMark.

- readonly writableNeedDrain: boolean

Is true if the stream's buffer has been full and the stream will emit 'drain'.

Calls readable.destroy() with an AbortError and returns a promise that fulfills when the stream is finished.

- @returns AsyncIterator to fully consume the stream.

- event: 'close', listener: () => void): this;
Event emitter. The defined events on documents include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'data', listener: (chunk: any) => void): this;
event: 'drain', listener: () => void): this;
event: 'end', listener: () => void): this;
event: 'error',): this;
event: 'finish', listener: () => void): this;
event: 'pause', listener: () => void): this;
event: 'pipe',): this;
event: 'readable', listener: () => void): this;
event: 'resume', listener: () => void): this;
event: 'unpipe',): this;
event: string | symbol, listener: (...args: any[]) => void): this;

Each of these overloads registers a listener for the corresponding event from the same event list above.
This method returns a new stream with chunks of the underlying stream paired with a counter in the form [index, chunk]. The first index value is 0 and it increases by 1 for each chunk produced.

@returns a stream of indexed pairs.
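A minimal sketch of the same [index, chunk] pairing, built here from the readable.map() helper with an external counter (an assumption for illustration; the built-in indexed-pairs helper is experimental in Node.js):

```javascript
// Pair each chunk with its index, mimicking the [index, chunk] form
// described above. map() uses its default concurrency of 1, so chunk
// order (and therefore the counter) is preserved.
import { Readable } from 'node:stream';

let index = 0;
const pairs = await Readable.from(['a', 'b', 'c'])
  .map((chunk) => [index++, chunk])
  .toArray();

console.log(pairs); // [ [ 0, 'a' ], [ 1, 'b' ], [ 2, 'c' ] ]
```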
- stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
The writable.cork() method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.

The primary intent of writable.cork() is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them all to writable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork() without implementing writable._writev() may have an adverse effect on throughput.

See also: writable.uncork(), writable._writev().

- ): this;
Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the readable stream will release any internal resources and subsequent calls to push() will be ignored.

Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.

Implementors should not override this method, but instead implement readable._destroy().

@param error Error which will be passed as payload in 'error' event

- drop(limit: number,

This method returns a new stream with the first limit chunks dropped from the start.

@param limit the number of chunks to drop from the readable.
@returns a stream with limit chunks dropped from the start.
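As a small sketch, drop() combined with the toArray() helper (assuming a Node.js version where these stream helpers are available):

```javascript
import { Readable } from 'node:stream';

// Drop the first two chunks; only the remaining chunks reach the new stream.
const rest = await Readable.from([1, 2, 3, 4]).drop(2).toArray();

console.log(rest); // [ 3, 4 ]
```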
- emit(event: 'close'): boolean;
Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

Returns true if the event had listeners, false otherwise.

import { EventEmitter } from 'node:events';
const myEmitter = new EventEmitter();

// First listener
myEmitter.on('event', function firstListener() {
  console.log('Helloooo! first listener');
});
// Second listener
myEmitter.on('event', function secondListener(arg1, arg2) {
  console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
});
// Third listener
myEmitter.on('event', function thirdListener(...args) {
  const parameters = args.join(', ');
  console.log(`event with parameters ${parameters} in third listener`);
});

console.log(myEmitter.listeners('event'));

myEmitter.emit('event', 1, 2, 3, 4, 5);

// Prints:
// [
//   [Function: firstListener],
//   [Function: secondListener],
//   [Function: thirdListener]
// ]
// Helloooo! first listener
// event with parameters 1, 2 in second listener
// event with parameters 1, 2, 3, 4, 5 in third listener

- end(cb?: () => void): this;
Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

Calling the write method after calling end will raise an error.

// Write 'hello, ' and then end with 'world!'.
import fs from 'node:fs';
const file = fs.createWriteStream('example.txt');
file.write('hello, ');
file.end('world!');
// Writing more now is not allowed!

end(chunk: any, cb?: () => void): this;

Behaves as described above.

@param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

end(chunk: any, encoding: BufferEncoding, cb?: () => void): this;

Behaves as described above.

@param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.
@param encoding The encoding if chunk is a string

Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or Symbols.

import { EventEmitter } from 'node:events';

const myEE = new EventEmitter();
myEE.on('foo', () => {});
myEE.on('bar', () => {});

const sym = Symbol('symbol');
myEE.on(sym, () => {});

console.log(myEE.eventNames());
// Prints: [ 'foo', 'bar', Symbol(symbol) ]

- ): Promise<boolean>;
This method is similar to Array.prototype.every and calls fn on each chunk in the stream to check whether all awaited return values of fn are truthy. Once an fn call on a chunk's awaited return value is falsy, the stream is destroyed and the promise is fulfilled with false. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true.

@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to true if fn returned a truthy value for every one of the chunks.

This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise, that promise will be awaited.

@param fn a function to filter chunks from the stream. Async or not.
@returns a stream filtered with the predicate fn.
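A short sketch of both helpers together (assuming a Node.js version where the every(), filter(), and toArray() stream helpers are available):

```javascript
import { Readable } from 'node:stream';

// every(): fulfills with true only if fn is truthy for every chunk.
const allPositive = await Readable.from([1, 2, 3]).every((n) => n > 0);

// filter(): only chunks for which fn is truthy reach the result stream.
const evens = await Readable.from([1, 2, 3, 4])
  .filter((n) => n % 2 === 0)
  .toArray();

console.log(allPositive); // true
console.log(evens); // [ 2, 4 ]
```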
Once the decipher.final() method has been called, the Decipheriv object can no longer be used to decrypt data. Attempts to call decipher.final() more than once will result in an error being thrown.

@returns Any remaining deciphered contents. If outputEncoding is specified, a string is returned. If an outputEncoding is not provided, a Buffer is returned.

outputEncoding: BufferEncoding): string;

Behaves as described above.

@param outputEncoding The encoding of the return value.
@returns Any remaining deciphered contents. If outputEncoding is specified, a string is returned. If an outputEncoding is not provided, a Buffer is returned.

- ): Promise<undefined | T>;
This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.

@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.

find(): Promise<any>;

Behaves as described above.

@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.

This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.

It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.

@param fn a function to map over every chunk in the stream. May be async. May be a stream or generator.
@returns a stream flat-mapped with the function fn.
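A minimal flatMap() sketch: each fn result is an array, which is iterable, so its elements are flattened into individual chunks (assuming a Node.js version where these stream helpers are available):

```javascript
import { Readable } from 'node:stream';

// Each fn result (here an array) is flattened so its elements become
// individual chunks of the output stream.
const words = await Readable.from(['a b', 'c d'])
  .flatMap((line) => line.split(' '))
  .toArray();

console.log(words); // [ 'a', 'b', 'c', 'd' ]
```

Note that returning a plain string from fn would also be flattened, character by character, since strings are iterable; returning an array avoids that surprise.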
- ): Promise<void>;

This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise, that promise will be awaited.

This method is different from for await...of loops in that it can optionally process chunks concurrently. In addition, a forEach iteration can only be stopped by having passed a signal option and aborting the related AbortController, while for await...of can be stopped with break or return. In either case the stream will be destroyed.

This method is different from listening to the 'data' event in that it uses the readable event in the underlying machinery and can limit the number of concurrent fn calls.

@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise for when the stream has finished.
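A small forEach() sketch (assuming a Node.js version where this stream helper is available):

```javascript
import { Readable } from 'node:stream';

const seen = [];
// The returned promise fulfills once every chunk has been processed.
await Readable.from([1, 2, 3]).forEach((n) => {
  seen.push(n * 2);
});

console.log(seen); // [ 2, 4, 6 ]
```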
Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to EventEmitter.defaultMaxListeners.

The readable.isPaused() method returns the current operating state of the Readable. This is used primarily by the mechanism that underlies the readable.pipe() method. In most typical cases, there will be no reason to use this method directly.

const readable = new stream.Readable();

readable.isPaused(); // === false
readable.pause();
readable.isPaused(); // === true
readable.resume();
readable.isPaused(); // === false

- options?: { destroyOnReturn: boolean }): AsyncIterator<any>;

The iterator created by this method gives users the option to cancel the destruction of the stream if the for await...of loop is exited by return, break, or throw, or if the iterator should destroy the stream if the stream emitted an error during iteration.

- eventName: string | symbol, listener?: Function): number;
Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.

@param eventName The name of the event being listened for
@param listener The event handler function

- eventName: string | symbol): Function[];

Returns a copy of the array of listeners for the event named eventName.

server.on('connection', (stream) => {
  console.log('someone connected!');
});
console.log(util.inspect(server.listeners('connection')));
// Prints: [ [Function] ]

- map(

This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise, that promise will be awaited before being passed to the result stream.

@param fn a function to map over every chunk in the stream. Async or not.
@returns a stream mapped with the function fn.
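A small map() sketch with an async fn (assuming a Node.js version where the map() and toArray() stream helpers are available):

```javascript
import { Readable } from 'node:stream';

// fn may be async; each awaited result becomes a chunk of the new stream.
const doubled = await Readable.from([1, 2, 3])
  .map(async (n) => n * 2)
  .toArray();

console.log(doubled); // [ 2, 4, 6 ]
```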
- eventName: string | symbol, listener: (...args: any[]) => void): this;

Alias for emitter.removeListener().

- on(event: 'close', listener: () => void): this;

Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

server.on('connection', (stream) => {
  console.log('someone connected!');
});

Returns a reference to the EventEmitter, so that calls can be chained.

By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

import { EventEmitter } from 'node:events';
const myEE = new EventEmitter();
myEE.on('foo', () => console.log('a'));
myEE.prependListener('foo', () => console.log('b'));
myEE.emit('foo');
// Prints:
//   b
//   a

@param listener The callback function

- once(event: 'close', listener: () => void): this;

Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

server.once('connection', (stream) => {
  console.log('Ah, we have our first user!');
});

Returns a reference to the EventEmitter, so that calls can be chained.

By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

import { EventEmitter } from 'node:events';
const myEE = new EventEmitter();
myEE.once('foo', () => console.log('a'));
myEE.prependOnceListener('foo', () => console.log('b'));
myEE.emit('foo');
// Prints:
//   b
//   a

@param listener The callback function
The readable.pause() method will cause a stream in flowing mode to stop emitting 'data' events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.

const readable = getReadableStreamSomehow();
readable.on('data', (chunk) => {
  console.log(`Received ${chunk.length} bytes of data.`);
  readable.pause();
  console.log('There will be no additional data for 1 second.');
  setTimeout(() => {
    console.log('Now data will start flowing again.');
    readable.resume();
  }, 1000);
});

The readable.pause() method has no effect if there is a 'readable' event listener.

- event: 'close', listener: () => void): this;

Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

server.prependListener('connection', (stream) => {
  console.log('someone connected!');
});

Returns a reference to the EventEmitter, so that calls can be chained.

@param listener The callback function

- event: 'close', listener: () => void): this;

Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

server.prependOnceListener('connection', (stream) => {
  console.log('Ah, we have our first user!');
});

Returns a reference to the EventEmitter, so that calls can be chained.

@param listener The callback function

- eventName: string | symbol): Function[];

Returns a copy of the array of listeners for the event named eventName, including any wrappers (such as those created by .once()).

import { EventEmitter } from 'node:events';
const emitter = new EventEmitter();
emitter.once('log', () => console.log('log once'));

// Returns a new Array with a function `onceWrapper` which has a property
// `listener` which contains the original listener bound above
const listeners = emitter.rawListeners('log');
const logFnWrapper = listeners[0];

// Logs "log once" to the console and does not unbind the `once` event
logFnWrapper.listener();

// Logs "log once" to the console and removes the listener
logFnWrapper();

emitter.on('log', () => console.log('log persistently'));
// Will return a new Array with a single function bound by `.on()` above
const newListeners = emitter.rawListeners('log');

// Logs "log persistently" twice
newListeners[0]();
emitter.emit('log');

- read(size?: number): any;
The
readable.read()method reads data out of the internal buffer and returns it. If no data is available to be read,nullis returned. By default, the data is returned as aBufferobject unless an encoding has been specified using thereadable.setEncoding()method or the stream is operating in object mode.The optional
sizeargument specifies a specific number of bytes to read. Ifsizebytes are not available to be read,nullwill be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.If the
sizeargument is not specified, all of the data contained in the internal buffer will be returned.The
sizeargument must be less than or equal to 1 GiB.The
readable.read()method should only be called onReadablestreams operating in paused mode. In flowing mode,readable.read()is called automatically until the internal buffer is fully drained.const readable = getReadableStreamSomehow(); // 'readable' may be triggered multiple times as data is buffered in readable.on('readable', () => { let chunk; console.log('Stream is readable (new data received in buffer)'); // Use a loop to make sure we read all currently available data while (null !== (chunk = readable.read())) { console.log(`Read ${chunk.length} bytes of data...`); } }); // 'end' will be triggered once when there is no more data available readable.on('end', () => { console.log('Reached end of stream.'); });Each call to
readable.read() returns a chunk of data, or null. The chunks are not concatenated. A while loop is necessary to consume all data currently in the buffer. When reading a large file, .read() may return null, having consumed all buffered content so far, with more data yet to come that has not been buffered. In this case a new 'readable' event will be emitted when there is more data in the buffer. Finally, the 'end' event will be emitted when there is no more data to come. Therefore, to read a file's whole contents from a
readable, it is necessary to collect chunks across multiple'readable'events:const chunks = []; readable.on('readable', () => { let chunk; while (null !== (chunk = readable.read())) { chunks.push(chunk); } }); readable.on('end', () => { const content = chunks.join(''); });A
Readablestream in object mode will always return a single item from a call toreadable.read(size), regardless of the value of thesizeargument.If the
readable.read()method returns a chunk of data, a'data'event will also be emitted.Calling read after the
'end'event has been emitted will returnnull. No runtime error will be raised.@param sizeOptional argument to specify how much data to read.
- initial?: undefined,): Promise<T>;
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeError with the ERR_INVALID_ARGS code property. The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the
readable.map method.@param fn a reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
initial: T,): Promise<T>;This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeError with the ERR_INVALID_ARGS code property. The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the
readable.map method.@param fn a reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
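As a sketch of the reduction described above (assuming a Node.js version where the experimental stream helper methods are available; the chunk values are illustrative):

```javascript
import { Readable } from 'node:stream';

// Sum the byte lengths of all chunks, starting from an initial value of 0.
const totalLength = await Readable.from([
  Buffer.from('hello'),
  Buffer.from('world'),
]).reduce((total, chunk) => total + chunk.length, 0);

console.log(totalLength); // 10
```

Because the reducer runs element-by-element, `total` always reflects every previously processed chunk before the next one is handled.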
- eventName?: string | symbol): this;
Removes all listeners, or those of the specified
eventName.It is bad practice to remove listeners added elsewhere in the code, particularly when the
EventEmitterinstance was created by some other component or module (e.g. sockets or file streams).Returns a reference to the
EventEmitter, so that calls can be chained. - event: 'close',listener: () => void): this;
Removes the specified
listenerfrom the listener array for the event namedeventName.const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback);removeListener()will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specifiedeventName, thenremoveListener()must be called multiple times to remove each instance.Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any
removeListener()orremoveAllListeners()calls after emitting and before the last listener finishes execution will not remove them fromemit()in progress. Subsequent events behave as expected.import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // ABecause listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the
emitter.listeners()method will need to be recreated.When a single function has been added as a handler multiple times for a single event (as in the example below),
removeListener()will remove the most recently added instance. In the example theonce('ping')listener is removed:import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping');Returns a reference to the
EventEmitter, so that calls can be chained. The
readable.resume()method causes an explicitly pausedReadablestream to resume emitting'data'events, switching the stream into flowing mode.The
readable.resume()method can be used to fully consume the data from a stream without actually processing any of that data:getReadableStreamSomehow() .resume() .on('end', () => { console.log('Reached the end, but did not read anything.'); });The
readable.resume()method has no effect if there is a'readable'event listener.- auto_padding?: boolean): this;
When data has been encrypted without standard block padding, calling
decipher.setAutoPadding(false) will disable automatic padding to prevent decipher.final() from checking for and removing padding. Turning auto padding off will only work if the input data's length is a multiple of the cipher's block size.
The
decipher.setAutoPadding()method must be called beforedecipher.final().@returnsfor method chaining.
- encoding: BufferEncoding): this;
The
writable.setDefaultEncoding()method sets the defaultencodingfor aWritablestream.@param encodingThe new default encoding
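A small sketch of the effect (the in-memory sink below is a hypothetical Writable used only for illustration):

```javascript
import { Writable } from 'node:stream';

const received = [];
const writable = new Writable({
  write(chunk, encoding, callback) {
    received.push(chunk); // chunks arrive as Buffers by default
    callback();
  },
});

// Strings written without an explicit encoding are now decoded as hex.
writable.setDefaultEncoding('hex');
writable.write('deadbeef');

console.log(received[0]); // <Buffer de ad be ef>
```

Without the setDefaultEncoding('hex') call, the string would have been decoded as UTF-8 instead.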
- encoding: BufferEncoding): this;
The
readable.setEncoding()method sets the character encoding for data read from theReadablestream.By default, no encoding is assigned and stream data will be returned as
Bufferobjects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than asBufferobjects. For instance, callingreadable.setEncoding('utf8')will cause the output data to be interpreted as UTF-8 data, and passed as strings. Callingreadable.setEncoding('hex')will cause the data to be encoded in hexadecimal string format.The
Readablestream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream asBufferobjects.const readable = getReadableStreamSomehow(); readable.setEncoding('utf8'); readable.on('data', (chunk) => { assert.equal(typeof chunk, 'string'); console.log('Got %d characters of string data:', chunk.length); });@param encodingThe encoding to use.
- n: number): this;
By default
EventEmitters will print a warning if more than10listeners are added for a particular event. This is a useful default that helps finding memory leaks. Theemitter.setMaxListeners()method allows the limit to be modified for this specificEventEmitterinstance. The value can be set toInfinity(or0) to indicate an unlimited number of listeners.Returns a reference to the
EventEmitter, so that calls can be chained. - some(): Promise<boolean>;
This method is similar to
Array.prototype.some and calls fn on each chunk in the stream until the awaited return value is true (or any truthy value). Once an fn call on a chunk's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false.@param fn a function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
trueif fn returned a truthy value for at least one of the chunks. - @param limit
the number of chunks to take from the readable.
@returnsa stream with limit chunks taken.
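For example (assuming a Node.js version with the experimental stream helper methods):

```javascript
import { Readable } from 'node:stream';

// take(2) ends the stream after the first two chunks; the rest are never read.
const taken = [];
for await (const chunk of Readable.from([1, 2, 3, 4]).take(2)) {
  taken.push(chunk);
}

console.log(taken); // [ 1, 2 ]
```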
- ): Promise<any[]>;
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.
@returnsa promise containing an array with the contents of the stream.
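A minimal sketch, suitable only for streams known to fit in memory (assuming a Node.js version with the experimental stream helper methods):

```javascript
import { Readable } from 'node:stream';

// Collect every chunk of a small stream into a single array.
const contents = await Readable.from(['a', 'b', 'c']).toArray();

console.log(contents); // [ 'a', 'b', 'c' ]
```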
The
writable.uncork()method flushes all data buffered since cork was called.When using
writable.cork()andwritable.uncork()to manage the buffering of writes to a stream, defer calls towritable.uncork()usingprocess.nextTick(). Doing so allows batching of allwritable.write()calls that occur within a given Node.js event loop phase.stream.cork(); stream.write('some '); stream.write('data '); process.nextTick(() => stream.uncork());If the
writable.cork()method is called multiple times on a stream, the same number of calls towritable.uncork()must be called to flush the buffered data.stream.cork(); stream.write('some '); stream.cork(); stream.write('data '); process.nextTick(() => { stream.uncork(); // The data will not be flushed until uncork() is called a second time. stream.uncork(); });See also:
writable.cork().- destination?: WritableStream): this;
The
readable.unpipe()method detaches aWritablestream previously attached using the pipe method.If the
destinationis not specified, then all pipes are detached.If the
destinationis specified, but no pipe is set up for it, then the method does nothing.import fs from 'node:fs'; const readable = getReadableStreamSomehow(); const writable = fs.createWriteStream('file.txt'); // All the data from readable goes into 'file.txt', // but only for the first second. readable.pipe(writable); setTimeout(() => { console.log('Stop writing to file.txt.'); readable.unpipe(writable); console.log('Manually close the file stream.'); writable.end(); }, 1000);@param destinationOptional specific stream to unpipe
- chunk: any,encoding?: BufferEncoding): void;
Passing
chunkasnullsignals the end of the stream (EOF) and behaves the same asreadable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.The
readable.unshift()method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.The
stream.unshift(chunk)method cannot be called after the'end'event has been emitted or a runtime error will be thrown.Developers using
stream.unshift()often should consider switching to use of aTransformstream instead. See theAPI for stream implementerssection for more information.// Pull off a header delimited by \n\n. // Use unshift() if we get too much. // Call the callback with (error, header, stream). import { StringDecoder } from 'node:string_decoder'; function parseHeader(stream, callback) { stream.on('error', callback); stream.on('readable', onReadable); const decoder = new StringDecoder('utf8'); let header = ''; function onReadable() { let chunk; while (null !== (chunk = stream.read())) { const str = decoder.write(chunk); if (str.includes('\n\n')) { // Found the header boundary. const split = str.split(/\n\n/); header += split.shift(); const remaining = split.join('\n\n'); const buf = Buffer.from(remaining, 'utf8'); stream.removeListener('error', callback); // Remove the 'readable' listener before unshifting. stream.removeListener('readable', onReadable); if (buf.length) stream.unshift(buf); // Now the body of the message can be read from the stream. callback(null, header, stream); return; } // Still reading the header. header += str; } } }Unlike push,
stream.unshift(chunk)will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results ifreadable.unshift()is called during a read (i.e. from within a _read implementation on a custom stream). Following the call toreadable.unshift()with an immediate push will reset the reading state appropriately, however it is best to simply avoid callingreadable.unshift()while in the process of performing a read.@param chunkChunk of data to unshift onto the read queue. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray}, {DataView} ornull. For object mode streams,chunkmay be any JavaScript value.@param encodingEncoding of string chunks. Must be a valid
Bufferencoding, such as'utf8'or'ascii'. - data: ArrayBufferView): NonSharedBuffer;
Updates the decipher with
data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer. If data is a Buffer then inputEncoding is ignored. The
outputEncoding specifies the output format of the deciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned. The
decipher.update() method can be called multiple times with new data until decipher.final() is called. Calling decipher.update() after decipher.final() will result in an error being thrown.data: string,): NonSharedBuffer;Updates the decipher with
data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer. If data is a Buffer then inputEncoding is ignored. The
outputEncoding specifies the output format of the deciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned. The
decipher.update() method can be called multiple times with new data until decipher.final() is called. Calling decipher.update() after decipher.final() will result in an error being thrown.@param inputEncodingThe
encodingof thedatastring.data: ArrayBufferView,inputEncoding: undefined,): string;Updates the decipher with
data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer. If data is a Buffer then inputEncoding is ignored. The
outputEncoding specifies the output format of the deciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned. The
decipher.update() method can be called multiple times with new data until decipher.final() is called. Calling decipher.update() after decipher.final() will result in an error being thrown.@param inputEncodingThe
encodingof thedatastring.@param outputEncodingThe
encodingof the return value.data: string,): string;Updates the decipher with
data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer. If data is a Buffer then inputEncoding is ignored. The
outputEncoding specifies the output format of the deciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned. The
decipher.update() method can be called multiple times with new data until decipher.final() is called. Calling decipher.update() after decipher.final() will result in an error being thrown.@param inputEncodingThe
encodingof thedatastring.@param outputEncodingThe
encodingof the return value. - wrap(stream: ReadableStream): this;
Prior to Node.js 0.10, streams did not implement the entire
node:streammodule API as it is currently defined. (SeeCompatibilityfor more information.)When using an older Node.js library that emits
'data'events and has a pause method that is advisory only, thereadable.wrap()method can be used to create aReadablestream that uses the old stream as its data source.It will rarely be necessary to use
readable.wrap()but the method has been provided as a convenience for interacting with older Node.js applications and libraries.import { OldReader } from './old-api-module.js'; import { Readable } from 'node:stream'; const oreader = new OldReader(); const myReader = new Readable().wrap(oreader); myReader.on('readable', () => { myReader.read(); // etc. });@param streamAn "old style" readable stream
- chunk: any,): boolean;
The
writable.write()method writes some data to the stream, and calls the suppliedcallbackonce the data has been fully handled. If an error occurs, thecallbackwill be called with the error as its first argument. Thecallbackis called asynchronously and before'error'is emitted.The return value is
trueif the internal buffer is less than thehighWaterMarkconfigured when the stream was created after admittingchunk. Iffalseis returned, further attempts to write data to the stream should stop until the'drain'event is emitted.While a stream is not draining, calls to
write()will bufferchunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'event will be emitted. Oncewrite()returns false, do not write more chunks until the'drain'event is emitted. While callingwrite()on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform, because theTransformstreams are paused by default until they are piped or a'data'or'readable'event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readableand use pipe. However, if callingwrite()is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });A
Writablestream in object mode will always ignore theencodingargument.@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.@param callbackCallback for when this chunk of data is flushed.
@returnsfalseif the stream wishes for the calling code to wait for the'drain'event to be emitted before continuing to write additional data; otherwisetrue.chunk: any,encoding: BufferEncoding,): boolean;The
writable.write()method writes some data to the stream, and calls the suppliedcallbackonce the data has been fully handled. If an error occurs, thecallbackwill be called with the error as its first argument. Thecallbackis called asynchronously and before'error'is emitted.The return value is
trueif the internal buffer is less than thehighWaterMarkconfigured when the stream was created after admittingchunk. Iffalseis returned, further attempts to write data to the stream should stop until the'drain'event is emitted.While a stream is not draining, calls to
write()will bufferchunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'event will be emitted. Oncewrite()returns false, do not write more chunks until the'drain'event is emitted. While callingwrite()on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform, because theTransformstreams are paused by default until they are piped or a'data'or'readable'event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readableand use pipe. However, if callingwrite()is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });A
Writablestream in object mode will always ignore theencodingargument.@param chunkOptional data to write. For streams not operating in object mode,
chunkmust be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunkmay be any JavaScript value other thannull.@param encodingThe encoding, if
chunkis a string.@param callbackCallback for when this chunk of data is flushed.
@returnsfalseif the stream wishes for the calling code to wait for the'drain'event to be emitted before continuing to write additional data; otherwisetrue.
- As a
interface DecipherGCM
Instances of the
Decipherivclass are used to decrypt data. The class can be used in one of two ways:- As a
streamthat is both readable and writable, where plain encrypted data is written to produce unencrypted data on the readable side, or - Using the
decipher.update()anddecipher.final()methods to produce the unencrypted data.
The createDecipheriv method is used to create
Decipherivinstances.Decipherivobjects are not to be created directly using thenewkeyword.Example: Using
Decipherivobjects as streams:import { Buffer } from 'node:buffer'; const { scryptSync, createDecipheriv, } = await import('node:crypto'); const algorithm = 'aes-192-cbc'; const password = 'Password used to generate key'; // Key length is dependent on the algorithm. In this case for aes192, it is // 24 bytes (192 bits). // Use the async `crypto.scrypt()` instead. const key = scryptSync(password, 'salt', 24); // The IV is usually passed along with the ciphertext. const iv = Buffer.alloc(16, 0); // Initialization vector. const decipher = createDecipheriv(algorithm, key, iv); let decrypted = ''; decipher.on('readable', () => { let chunk; while (null !== (chunk = decipher.read())) { decrypted += chunk.toString('utf8'); } }); decipher.on('end', () => { console.log(decrypted); // Prints: some clear text data }); // Encrypted with same algorithm, key and iv. const encrypted = 'e5f79c5915c02171eec6b212d5520d44480993d7d622a7c4c2da32f6efda0ffa'; decipher.write(encrypted, 'hex'); decipher.end();Example: Using
Decipherivand piped streams:import { createReadStream, createWriteStream, } from 'node:fs'; import { Buffer } from 'node:buffer'; const { scryptSync, createDecipheriv, } = await import('node:crypto'); const algorithm = 'aes-192-cbc'; const password = 'Password used to generate key'; // Use the async `crypto.scrypt()` instead. const key = scryptSync(password, 'salt', 24); // The IV is usually passed along with the ciphertext. const iv = Buffer.alloc(16, 0); // Initialization vector. const decipher = createDecipheriv(algorithm, key, iv); const input = createReadStream('test.enc'); const output = createWriteStream('test.js'); input.pipe(decipher).pipe(output);Example: Using the
decipher.update()anddecipher.final()methods:import { Buffer } from 'node:buffer'; const { scryptSync, createDecipheriv, } = await import('node:crypto'); const algorithm = 'aes-192-cbc'; const password = 'Password used to generate key'; // Use the async `crypto.scrypt()` instead. const key = scryptSync(password, 'salt', 24); // The IV is usually passed along with the ciphertext. const iv = Buffer.alloc(16, 0); // Initialization vector. const decipher = createDecipheriv(algorithm, key, iv); // Encrypted using same algorithm, key and iv. const encrypted = 'e5f79c5915c02171eec6b212d5520d44480993d7d622a7c4c2da32f6efda0ffa'; let decrypted = decipher.update(encrypted, 'hex', 'utf8'); decrypted += decipher.final('utf8'); console.log(decrypted); // Prints: some clear text data- allowHalfOpen: boolean
If
falsethen the stream will automatically end the writable side when the readable side ends. Set initially by theallowHalfOpenconstructor option, which defaults totrue.This can be changed manually to change the half-open behavior of an existing
Duplexstream instance, but must be changed before the'end'event is emitted. - readable: boolean
Is
trueif it is safe to call read, which means the stream has not been destroyed or emitted'error'or'end'. - readonly readableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'end'. - readonly readableEncoding: null | BufferEncoding
Getter for the property
encodingof a givenReadablestream. Theencodingproperty can be set using the setEncoding method. - readonly readableFlowing: null | boolean
This property reflects the current state of a
Readablestream as described in the Three states section. - readonly readableHighWaterMark: number
Returns the value of
highWaterMarkpassed when creating thisReadable. - readonly readableLength: number
This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the
highWaterMark. - readonly writable: boolean
Is
trueif it is safe to callwritable.write(), which means the stream has not been destroyed, errored, or ended. - readonly writableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'finish'. - readonly writableCorked: number
Number of times
writable.uncork()needs to be called in order to fully uncork the stream. - readonly writableEnded: boolean
Is
trueafterwritable.end()has been called. This property does not indicate whether the data has been flushed, for this usewritable.writableFinishedinstead. - readonly writableHighWaterMark: number
Return the value of
highWaterMarkpassed when creating thisWritable. - readonly writableLength: number
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the
highWaterMark. - readonly writableNeedDrain: boolean
Is
trueif the stream's buffer has been full and stream will emit'drain'. Calls
readable.destroy()with anAbortErrorand returns a promise that fulfills when the stream is finished.- @returns
AsyncIteratorto fully consume the stream. - event: 'close',listener: () => void): this;
Event emitter. The defined events on this stream include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'data',listener: (chunk: any) => void): this;
event: 'drain',listener: () => void): this;
event: 'end',listener: () => void): this;
event: 'error',): this;
event: 'finish',listener: () => void): this;
event: 'pause',listener: () => void): this;
event: 'pipe',): this;
event: 'readable',listener: () => void): this;
event: 'resume',listener: () => void): this;
event: 'unpipe',): this;
event: string | symbol,listener: (...args: any[]) => void): this;
This method returns a new stream with chunks of the underlying stream paired with a counter in the form
[index, chunk]. The first index value is0and it increases by 1 for each chunk produced.@returnsa stream of indexed pairs.
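The [index, chunk] pairing can be sketched with the stream map helper and a counter (a hand-rolled equivalent for illustration, not the method itself; it assumes a Node.js version with the experimental stream helper methods):

```javascript
import { Readable } from 'node:stream';

// Pair each chunk with a zero-based index, mirroring the [index, chunk] shape.
// map runs with a concurrency of 1 by default, so order is preserved.
let index = 0;
const pairs = await Readable.from(['a', 'b', 'c'])
  .map((chunk) => [index++, chunk])
  .toArray();

console.log(pairs); // [ [ 0, 'a' ], [ 1, 'b' ], [ 2, 'c' ] ]
```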
- stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
The
writable.cork()method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.The primary intent of
writable.cork()is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination,writable.cork()buffers all the chunks untilwritable.uncork()is called, which will pass them all towritable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use ofwritable.cork()without implementingwritable._writev()may have an adverse effect on throughput.See also:
writable.uncork(),writable._writev().- ): this;
Destroy the stream. Optionally emit an
'error'event, and emit a'close'event (unlessemitCloseis set tofalse). After this call, the readable stream will release any internal resources and subsequent calls topush()will be ignored.Once
destroy()has been called any further calls will be a no-op and no further errors except from_destroy()may be emitted as'error'.Implementors should not override this method, but instead implement
readable._destroy().@param errorError which will be passed as payload in
'error'event - drop(limit: number,
This method returns a new stream with the first limit chunks dropped from the start.
@param limitthe number of chunks to drop from the readable.
@returnsa stream with limit chunks dropped from the start.
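For example (assuming a Node.js version with the experimental stream helper methods):

```javascript
import { Readable } from 'node:stream';

// drop(2) skips the first two chunks and forwards the rest.
const rest = await Readable.from([1, 2, 3, 4]).drop(2).toArray();

console.log(rest); // [ 3, 4 ]
```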
- emit(event: 'close'): boolean;
Synchronously calls each of the listeners registered for the event named
eventName, in the order they were registered, passing the supplied arguments to each.Returns
trueif the event had listeners,falseotherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener - end(cb?: () => void): this;
Calling the
writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream. Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed! end(chunk: any,cb?: () => void): this; Calling the
writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream. Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed! @param chunk Optional data to write. For streams not operating in object mode,
chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null. end(chunk: any,encoding: BufferEncoding,cb?: () => void): this; Calling the
writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream. Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed! @param chunk Optional data to write. For streams not operating in object mode,
chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null. @param encoding The encoding if
chunk is a string Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbols.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]- ): Promise<boolean>;
This method is similar to
Array.prototype.every and calls fn on each chunk in the stream to check whether all awaited return values are truthy for fn. Once an fn call's awaited return value for a chunk is falsy, the stream is destroyed and the promise is fulfilled with false. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true. @param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to
true if fn returned a truthy value for every one of the chunks. This method allows filtering the stream. For each chunk in the stream the fn function will be called, and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise, that promise will be
awaited. @param fn a function to filter chunks from the stream. Async or not.
@returns a stream filtered with the predicate fn.
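A minimal sketch of filter() on an in-memory stream (the Readable iterator helpers are experimental in Node.js):

```javascript
import { Readable } from 'node:stream';

// Keep only the even chunks; an async predicate would be awaited per chunk.
const evens = await Readable.from([1, 2, 3, 4, 5])
  .filter((n) => n % 2 === 0)
  .toArray();
console.log(evens); // [ 2, 4 ]
```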
Once the
decipher.final() method has been called, the Decipheriv object can no longer be used to decrypt data. Attempts to call decipher.final() more than once will result in an error being thrown. @returns Any remaining deciphered contents. If
outputEncoding is specified, a string is returned. If an outputEncoding is not provided, a Buffer is returned. outputEncoding: BufferEncoding): string; Once the
decipher.final() method has been called, the Decipheriv object can no longer be used to decrypt data. Attempts to call decipher.final() more than once will result in an error being thrown. @param outputEncoding The
encoding of the return value. @returns Any remaining deciphered contents. If
outputEncoding is specified, a string is returned. If an outputEncoding is not provided, a Buffer is returned. - ): Promise<undefined | T>;
This method is similar to
Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined. @param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefined if no element was found. find(): Promise<any>; This method is similar to
Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined. @param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefined if no element was found. This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.
It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.
@param fn a function to map over every chunk in the stream. May be async. May be a stream or generator.
@returns a stream flat-mapped with the function fn.
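A quick sketch of flatMap(), where each chunk maps to an iterable that is flattened into the result stream (experimental Readable helper):

```javascript
import { Readable } from 'node:stream';

// Each chunk expands to two chunks; the iterables are flattened in order.
const out = await Readable.from(['a', 'b'])
  .flatMap((s) => [s, s.toUpperCase()])
  .toArray();
console.log(out); // [ 'a', 'A', 'b', 'B' ]
```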
- ): Promise<void>;
This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be
awaited. This method is different from
for await...of loops in that it can optionally process chunks concurrently. In addition, a forEach iteration can only be stopped by having passed a signal option and aborting the related AbortController, while for await...of can be stopped with break or return. In either case the stream will be destroyed. This method is different from listening to the
'data' event in that it uses the readable event in the underlying machinery and can limit the number of concurrent fn calls. @param fn a function to call on each chunk of the stream. Async or not.
@returns a promise for when the stream has finished.
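A minimal sketch of forEach(): the returned promise resolves only after every chunk has been processed (experimental Readable helper):

```javascript
import { Readable } from 'node:stream';

// Collect a transformed copy of each chunk as a side effect.
const seen = [];
await Readable.from([1, 2, 3]).forEach((n) => {
  seen.push(n * 10);
});
console.log(seen); // [ 10, 20, 30 ]
```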
Returns the current max listener value for the
EventEmitter, which is either set by emitter.setMaxListeners(n) or defaults to EventEmitter.defaultMaxListeners. The
readable.isPaused()method returns the current operating state of theReadable. This is used primarily by the mechanism that underlies thereadable.pipe()method. In most typical cases, there will be no reason to use this method directly.const readable = new stream.Readable(); readable.isPaused(); // === false readable.pause(); readable.isPaused(); // === true readable.resume(); readable.isPaused(); // === false- options?: { destroyOnReturn: boolean }): AsyncIterator<any>;
The iterator created by this method gives users the option to cancel the destruction of the stream if the
for await...of loop is exited by return, break, or throw, or if the iterator should destroy the stream if the stream emitted an error during iteration. - eventName: string | symbol,listener?: Function): number;
Returns the number of listeners listening for the event named
eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event. @param eventName The name of the event being listened for
@param listener The event handler function
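A short sketch of listenerCount() with a duplicate registration:

```javascript
import { EventEmitter } from 'node:events';

const ee = new EventEmitter();
const ping = () => {};
ee.on('ping', ping);
ee.on('ping', ping); // the same function may be registered more than once
console.log(ee.listenerCount('ping')); // 2
```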
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName. server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ] - map(
This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise, that promise will be
awaited before being passed to the result stream. @param fn a function to map over every chunk in the stream. Async or not.
@returns a stream mapped with the function fn.
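A minimal sketch of map() over an in-memory stream (experimental Readable helper):

```javascript
import { Readable } from 'node:stream';

// Each chunk is transformed; a returned promise would be awaited first.
const squares = await Readable.from([1, 2, 3]).map((n) => n * n).toArray();
console.log(squares); // [ 1, 4, 9 ]
```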
- eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for
emitter.removeListener(). - on(event: 'close',listener: () => void): this;
Adds the
listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times. server.on('connection', (stream) => { console.log('someone connected!'); }); Returns a reference to the
EventEmitter, so that calls can be chained. By default, event listeners are invoked in the order they are added. The
emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array. import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a @param listener The callback function
- once(event: 'close',listener: () => void): this;
Adds a one-time
listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked. server.once('connection', (stream) => { console.log('Ah, we have our first user!'); }); Returns a reference to the
EventEmitter, so that calls can be chained. By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array. import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a @param listener The callback function
The
readable.pause() method will cause a stream in flowing mode to stop emitting 'data' events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer. const readable = getReadableStreamSomehow(); readable.on('data', (chunk) => { console.log(`Received ${chunk.length} bytes of data.`); readable.pause(); console.log('There will be no additional data for 1 second.'); setTimeout(() => { console.log('Now data will start flowing again.'); readable.resume(); }, 1000); }); The
readable.pause() method has no effect if there is a 'readable' event listener. - event: 'close',listener: () => void): this;
Adds the
listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times. server.prependListener('connection', (stream) => { console.log('someone connected!'); }); Returns a reference to the
EventEmitter, so that calls can be chained. @param listener The callback function
- event: 'close',listener: () => void): this;
Adds a one-time
listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked. server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); }); Returns a reference to the
EventEmitter, so that calls can be chained. @param listener The callback function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName, including any wrappers (such as those created by.once()).import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log'); - read(size?: number): any;
The
readable.read() method reads data out of the internal buffer and returns it. If no data is available to be read, null is returned. By default, the data is returned as a Buffer object unless an encoding has been specified using the readable.setEncoding() method or the stream is operating in object mode. The optional
size argument specifies a specific number of bytes to read. If size bytes are not available to be read, null will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned. If the
size argument is not specified, all of the data contained in the internal buffer will be returned. The
size argument must be less than or equal to 1 GiB. The
readable.read()method should only be called onReadablestreams operating in paused mode. In flowing mode,readable.read()is called automatically until the internal buffer is fully drained.const readable = getReadableStreamSomehow(); // 'readable' may be triggered multiple times as data is buffered in readable.on('readable', () => { let chunk; console.log('Stream is readable (new data received in buffer)'); // Use a loop to make sure we read all currently available data while (null !== (chunk = readable.read())) { console.log(`Read ${chunk.length} bytes of data...`); } }); // 'end' will be triggered once when there is no more data available readable.on('end', () => { console.log('Reached end of stream.'); });Each call to
readable.read() returns a chunk of data, or null. The chunks are not concatenated. A while loop is necessary to consume all data currently in the buffer. When reading a large file, .read() may return null, having consumed all buffered content so far, even though more data that has not yet been buffered is still to come. In this case a new 'readable' event will be emitted when there is more data in the buffer. Finally, the 'end' event will be emitted when there is no more data to come. Therefore, to read a file's whole contents from a
readable, it is necessary to collect chunks across multiple'readable'events:const chunks = []; readable.on('readable', () => { let chunk; while (null !== (chunk = readable.read())) { chunks.push(chunk); } }); readable.on('end', () => { const content = chunks.join(''); });A
Readable stream in object mode will always return a single item from a call to readable.read(size), regardless of the value of the size argument. If the
readable.read() method returns a chunk of data, a 'data' event will also be emitted. Calling read after the
'end' event has been emitted will return null. No runtime error will be raised. @param size Optional argument to specify how much data to read.
- initial?: undefined,): Promise<T>;
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied, the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeError with the ERR_INVALID_ARGS code property. The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function and use the readable.map method. @param fn a reducer function to call over every chunk in the stream. Async or not.
@param initial the initial value to use in the reduction.
@returns a promise for the final value of the reduction.
initial: T,): Promise<T>; This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied, the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeError with the ERR_INVALID_ARGS code property. The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function and use the readable.map method. @param fn a reducer function to call over every chunk in the stream. Async or not.
@param initial the initial value to use in the reduction.
@returns a promise for the final value of the reduction.
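A minimal sketch of reduce() with an explicit initial value (experimental Readable helper):

```javascript
import { Readable } from 'node:stream';

// With an initial value of 0, the reducer sums the chunks one at a time.
const total = await Readable.from([1, 2, 3, 4]).reduce((acc, n) => acc + n, 0);
console.log(total); // 10
```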
- eventName?: string | symbol): this;
Removes all listeners, or those of the specified
eventName. It is bad practice to remove listeners added elsewhere in the code, particularly when the
EventEmitter instance was created by some other component or module (e.g. sockets or file streams). Returns a reference to the
EventEmitter, so that calls can be chained. - event: 'close',listener: () => void): this;
Removes the specified
listenerfrom the listener array for the event namedeventName.const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback);removeListener()will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specifiedeventName, thenremoveListener()must be called multiple times to remove each instance.Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any
removeListener()orremoveAllListeners()calls after emitting and before the last listener finishes execution will not remove them fromemit()in progress. Subsequent events behave as expected.import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // ABecause listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the
emitter.listeners()method will need to be recreated.When a single function has been added as a handler multiple times for a single event (as in the example below),
removeListener()will remove the most recently added instance. In the example theonce('ping')listener is removed:import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping');Returns a reference to the
EventEmitter, so that calls can be chained. The
readable.resume() method causes an explicitly paused Readable stream to resume emitting 'data' events, switching the stream into flowing mode. The
readable.resume() method can be used to fully consume the data from a stream without actually processing any of that data: getReadableStreamSomehow() .resume() .on('end', () => { console.log('Reached the end, but did not read anything.'); }); The
readable.resume() method has no effect if there is a 'readable' event listener. - auto_padding?: boolean): this;
When data has been encrypted without standard block padding, calling
decipher.setAutoPadding(false) will disable automatic padding to prevent decipher.final() from checking for and removing padding. Turning auto padding off will only work if the input data's length is a multiple of the cipher's block size.
The
decipher.setAutoPadding() method must be called before decipher.final(). @returns for method chaining.
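A minimal encrypt/decrypt round trip showing decipher.update() and decipher.final() with the default auto padding left on (AES-256-CBC with a random key and IV for illustration):

```javascript
import { createCipheriv, createDecipheriv, randomBytes } from 'node:crypto';

const key = randomBytes(32); // AES-256 requires a 32-byte key
const iv = randomBytes(16);  // CBC mode requires a 16-byte IV

const cipher = createCipheriv('aes-256-cbc', key, iv);
const encrypted = cipher.update('some clear text', 'utf8', 'hex') + cipher.final('hex');

// The default auto padding removes the block padding added by the cipher.
const decipher = createDecipheriv('aes-256-cbc', key, iv);
const decrypted = decipher.update(encrypted, 'hex', 'utf8') + decipher.final('utf8');
console.log(decrypted); // 'some clear text'
```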
- encoding: BufferEncoding): this;
The
writable.setDefaultEncoding() method sets the default encoding for a Writable stream. @param encoding The new default encoding
- encoding: BufferEncoding): this;
The
readable.setEncoding() method sets the character encoding for data read from the Readable stream. By default, no encoding is assigned and stream data will be returned as
Buffer objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than as Buffer objects. For instance, calling readable.setEncoding('utf8') will cause the output data to be interpreted as UTF-8 data, and passed as strings. Calling readable.setEncoding('hex') will cause the data to be encoded in hexadecimal string format. The
Readable stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream as Buffer objects. const readable = getReadableStreamSomehow(); readable.setEncoding('utf8'); readable.on('data', (chunk) => { assert.equal(typeof chunk, 'string'); console.log('Got %d characters of string data:', chunk.length); }); @param encoding The encoding to use.
- n: number): this;
By default
EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps find memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners. Returns a reference to the
EventEmitter, so that calls can be chained. - some(): Promise<boolean>;
This method is similar to
Array.prototype.some and calls fn on each chunk in the stream until the awaited return value is true (or any truthy value). Once an fn call's awaited return value for a chunk is truthy, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false. @param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to
true if fn returned a truthy value for at least one of the chunks. - @param limit
the number of chunks to take from the readable.
@returns a stream with limit chunks taken.
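A minimal sketch of take(), which ends the stream after the requested number of chunks (experimental Readable helper):

```javascript
import { Readable } from 'node:stream';

// take(2) yields the first two chunks and then ends the stream.
const firstTwo = await Readable.from([1, 2, 3, 4, 5]).take(2).toArray();
console.log(firstTwo); // [ 1, 2 ]
```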
- ): Promise<any[]>;
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.
@returns a promise containing an array with the contents of the stream.
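A minimal sketch of toArray(). It buffers the whole stream into one array, which is fine for small streams but defeats streaming for large inputs:

```javascript
import { Readable } from 'node:stream';

// Collect every chunk into a single in-memory array.
const chunks = await Readable.from(['a', 'b', 'c']).toArray();
console.log(chunks); // [ 'a', 'b', 'c' ]
```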
The
writable.uncork() method flushes all data buffered since writable.cork() was called. When using
writable.cork() and writable.uncork() to manage the buffering of writes to a stream, defer calls to writable.uncork() using process.nextTick(). Doing so allows batching of all writable.write() calls that occur within a given Node.js event loop phase. stream.cork(); stream.write('some '); stream.write('data '); process.nextTick(() => stream.uncork()); If the
writable.cork() method is called multiple times on a stream, the same number of calls to writable.uncork() must be made to flush the buffered data. stream.cork(); stream.write('some '); stream.cork(); stream.write('data '); process.nextTick(() => { stream.uncork(); // The data will not be flushed until uncork() is called a second time. stream.uncork(); }); See also:
writable.cork(). - destination?: WritableStream): this;
The
readable.unpipe() method detaches a Writable stream previously attached using the pipe method. If the
destination is not specified, then all pipes are detached. If the
destination is specified, but no pipe is set up for it, then the method does nothing. import fs from 'node:fs'; const readable = getReadableStreamSomehow(); const writable = fs.createWriteStream('file.txt'); // All the data from readable goes into 'file.txt', // but only for the first second. readable.pipe(writable); setTimeout(() => { console.log('Stop writing to file.txt.'); readable.unpipe(writable); console.log('Manually close the file stream.'); writable.end(); }, 1000); @param destination Optional specific stream to unpipe
- chunk: any,encoding?: BufferEncoding): void;
Passing
chunk as null signals the end of the stream (EOF) and behaves the same as readable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed. The
readable.unshift() method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party. The
stream.unshift(chunk) method cannot be called after the 'end' event has been emitted or a runtime error will be thrown. Developers using
stream.unshift()often should consider switching to use of aTransformstream instead. See theAPI for stream implementerssection for more information.// Pull off a header delimited by \n\n. // Use unshift() if we get too much. // Call the callback with (error, header, stream). import { StringDecoder } from 'node:string_decoder'; function parseHeader(stream, callback) { stream.on('error', callback); stream.on('readable', onReadable); const decoder = new StringDecoder('utf8'); let header = ''; function onReadable() { let chunk; while (null !== (chunk = stream.read())) { const str = decoder.write(chunk); if (str.includes('\n\n')) { // Found the header boundary. const split = str.split(/\n\n/); header += split.shift(); const remaining = split.join('\n\n'); const buf = Buffer.from(remaining, 'utf8'); stream.removeListener('error', callback); // Remove the 'readable' listener before unshifting. stream.removeListener('readable', onReadable); if (buf.length) stream.unshift(buf); // Now the body of the message can be read from the stream. callback(null, header, stream); return; } // Still reading the header. header += str; } } }Unlike push,
stream.unshift(chunk) will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results if readable.unshift() is called during a read (i.e. from within a _read implementation on a custom stream). Following the call to readable.unshift() with an immediate push will reset the reading state appropriately; however, it is best to simply avoid calling readable.unshift() while in the process of performing a read. @param chunk Chunk of data to unshift onto the read queue. For streams not operating in object mode,
chunk must be a {string}, {Buffer}, {TypedArray}, {DataView} or null. For object mode streams, chunk may be any JavaScript value. @param encoding Encoding of string chunks. Must be a valid
Buffer encoding, such as 'utf8' or 'ascii'. - data: ArrayBufferView): NonSharedBuffer;
Updates the decipher with
data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer. If data is a Buffer then inputEncoding is ignored. The
outputEncoding specifies the output format of the enciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned. The
decipher.update() method can be called multiple times with new data until decipher.final() is called. Calling decipher.update() after decipher.final() will result in an error being thrown. data: string,): NonSharedBuffer; Updates the decipher with
data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer. If data is a Buffer then inputEncoding is ignored. The
outputEncoding specifies the output format of the enciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned. The
decipher.update() method can be called multiple times with new data until decipher.final() is called. Calling decipher.update() after decipher.final() will result in an error being thrown. @param inputEncoding The
encoding of the data string. data: ArrayBufferView,inputEncoding: undefined,): string; Updates the decipher with
data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer. If data is a Buffer then inputEncoding is ignored. The
outputEncoding specifies the output format of the enciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned. The
decipher.update() method can be called multiple times with new data until decipher.final() is called. Calling decipher.update() after decipher.final() will result in an error being thrown. @param inputEncoding The
encoding of the data string. @param outputEncoding The
encoding of the return value. data: string,): string; Updates the decipher with
data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer. If data is a Buffer then inputEncoding is ignored. The
outputEncoding specifies the output format of the enciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned. The
decipher.update() method can be called multiple times with new data until decipher.final() is called. Calling decipher.update() after decipher.final() will result in an error being thrown. @param inputEncoding The
encoding of the data string. @param outputEncoding The
encoding of the return value. - wrap(stream: ReadableStream): this;
Prior to Node.js 0.10, streams did not implement the entire
node:stream module API as it is currently defined. (See Compatibility for more information.) When using an older Node.js library that emits
'data' events and has a pause method that is advisory only, the readable.wrap() method can be used to create a Readable stream that uses the old stream as its data source. It will rarely be necessary to use
readable.wrap(), but the method has been provided as a convenience for interacting with older Node.js applications and libraries. import { OldReader } from './old-api-module.js'; import { Readable } from 'node:stream'; const oreader = new OldReader(); const myReader = new Readable().wrap(oreader); myReader.on('readable', () => { myReader.read(); // etc. }); @param stream An "old style" readable stream
- chunk: any,): boolean;
The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.
The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.
While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing to a socket that is not draining may lead to a remotely exploitable vulnerability.
Writing data while the stream is not draining is particularly problematic for a Transform, because Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.
If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event: function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });
A Writable stream in object mode will always ignore the encoding argument.
@param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.
@param callback Callback for when this chunk of data is flushed.
@returns false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.
chunk: any, encoding: BufferEncoding,): boolean;
The
writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.
The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.
While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing to a socket that is not draining may lead to a remotely exploitable vulnerability.
Writing data while the stream is not draining is particularly problematic for a Transform, because Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.
If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event: function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });
A Writable stream in object mode will always ignore the encoding argument.
@param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.
@param encoding The encoding, if chunk is a string.
@param callback Callback for when this chunk of data is flushed.
@returns false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.
interface DecipherOCB
Instances of the Decipheriv class are used to decrypt data. The class can be used in one of two ways:
- As a stream that is both readable and writable, where plain encrypted data is written to produce unencrypted data on the readable side, or
- Using the decipher.update() and decipher.final() methods to produce the unencrypted data.
The createDecipheriv method is used to create Decipheriv instances. Decipheriv objects are not to be created directly using the new keyword.
Example: Using
Decipherivobjects as streams:import { Buffer } from 'node:buffer'; const { scryptSync, createDecipheriv, } = await import('node:crypto'); const algorithm = 'aes-192-cbc'; const password = 'Password used to generate key'; // Key length is dependent on the algorithm. In this case for aes192, it is // 24 bytes (192 bits). // Use the async `crypto.scrypt()` instead. const key = scryptSync(password, 'salt', 24); // The IV is usually passed along with the ciphertext. const iv = Buffer.alloc(16, 0); // Initialization vector. const decipher = createDecipheriv(algorithm, key, iv); let decrypted = ''; decipher.on('readable', () => { let chunk; while (null !== (chunk = decipher.read())) { decrypted += chunk.toString('utf8'); } }); decipher.on('end', () => { console.log(decrypted); // Prints: some clear text data }); // Encrypted with same algorithm, key and iv. const encrypted = 'e5f79c5915c02171eec6b212d5520d44480993d7d622a7c4c2da32f6efda0ffa'; decipher.write(encrypted, 'hex'); decipher.end();Example: Using
Decipherivand piped streams:import { createReadStream, createWriteStream, } from 'node:fs'; import { Buffer } from 'node:buffer'; const { scryptSync, createDecipheriv, } = await import('node:crypto'); const algorithm = 'aes-192-cbc'; const password = 'Password used to generate key'; // Use the async `crypto.scrypt()` instead. const key = scryptSync(password, 'salt', 24); // The IV is usually passed along with the ciphertext. const iv = Buffer.alloc(16, 0); // Initialization vector. const decipher = createDecipheriv(algorithm, key, iv); const input = createReadStream('test.enc'); const output = createWriteStream('test.js'); input.pipe(decipher).pipe(output);Example: Using the
decipher.update()anddecipher.final()methods:import { Buffer } from 'node:buffer'; const { scryptSync, createDecipheriv, } = await import('node:crypto'); const algorithm = 'aes-192-cbc'; const password = 'Password used to generate key'; // Use the async `crypto.scrypt()` instead. const key = scryptSync(password, 'salt', 24); // The IV is usually passed along with the ciphertext. const iv = Buffer.alloc(16, 0); // Initialization vector. const decipher = createDecipheriv(algorithm, key, iv); // Encrypted using same algorithm, key and iv. const encrypted = 'e5f79c5915c02171eec6b212d5520d44480993d7d622a7c4c2da32f6efda0ffa'; let decrypted = decipher.update(encrypted, 'hex', 'utf8'); decrypted += decipher.final('utf8'); console.log(decrypted); // Prints: some clear text data- allowHalfOpen: boolean
If false then the stream will automatically end the writable side when the readable side ends. Set initially by the allowHalfOpen constructor option, which defaults to true. This can be changed manually to change the half-open behavior of an existing Duplex stream instance, but must be changed before the 'end' event is emitted.
- readable: boolean
Is true if it is safe to call read, which means the stream has not been destroyed or emitted 'error' or 'end'.
- readonly readableAborted: boolean
Returns whether the stream was destroyed or errored before emitting 'end'.
- readonly readableEncoding: null | BufferEncoding
Getter for the property encoding of a given Readable stream. The encoding property can be set using the setEncoding method.
- readonly readableFlowing: null | boolean
This property reflects the current state of a Readable stream as described in the Three states section.
- readonly readableHighWaterMark: number
Returns the value of highWaterMark passed when creating this Readable.
- readonly readableLength: number
This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the highWaterMark.
- readonly writable: boolean
Is true if it is safe to call writable.write(), which means the stream has not been destroyed, errored, or ended.
- readonly writableAborted: boolean
Returns whether the stream was destroyed or errored before emitting 'finish'.
- readonly writableCorked: number
Number of times writable.uncork() needs to be called in order to fully uncork the stream.
- readonly writableEnded: boolean
Is true after writable.end() has been called. This property does not indicate whether the data has been flushed; for that, use writable.writableFinished instead.
- readonly writableHighWaterMark: number
Returns the value of highWaterMark passed when creating this Writable.
- readonly writableLength: number
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the highWaterMark.
- readonly writableNeedDrain: boolean
Is true if the stream's buffer has been full and the stream will emit 'drain'.
Calls readable.destroy() with an AbortError and returns a promise that fulfills when the stream is finished.
- @returns AsyncIterator to fully consume the stream.
- event: 'close',listener: () => void): this;
Event emitter. The defined events on documents include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'data',listener: (chunk: any) => void): this;
event: 'drain',listener: () => void): this;
event: 'end',listener: () => void): this;
event: 'error',): this;
event: 'finish',listener: () => void): this;
event: 'pause',listener: () => void): this;
event: 'pipe',): this;
event: 'readable',listener: () => void): this;
event: 'resume',listener: () => void): this;
event: 'unpipe',): this;
event: string | symbol,listener: (...args: any[]) => void): this;
This method returns a new stream with chunks of the underlying stream paired with a counter in the form [index, chunk]. The first index value is 0 and it increases by 1 for each chunk produced.
@returns a stream of indexed pairs.
- stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
The writable.cork() method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.
The primary intent of writable.cork() is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them all to writable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork() without implementing writable._writev() may have an adverse effect on throughput.
See also: writable.uncork(), writable._writev().
- ): this;
Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the readable stream will release any internal resources and subsequent calls to push() will be ignored.
Once destroy() has been called, any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.
Implementors should not override this method, but instead implement readable._destroy().
@param error Error which will be passed as payload in 'error' event
- drop(limit: number,
This method returns a new stream with the first limit chunks dropped from the start.
@param limitthe number of chunks to drop from the readable.
@returnsa stream with limit chunks dropped from the start.
- emit(event: 'close'): boolean;
Synchronously calls each of the listeners registered for the event named
eventName, in the order they were registered, passing the supplied arguments to each.Returns
trueif the event had listeners,falseotherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener - end(cb?: () => void): this;
Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.
Calling the write method after calling end will raise an error. // Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!
end(chunk: any, cb?: () => void): this;
Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.
Calling the write method after calling end will raise an error. // Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!
@param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.
end(chunk: any, encoding: BufferEncoding, cb?: () => void): this;
Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.
Calling the write method after calling end will raise an error. // Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!
@param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.
@param encoding The encoding if chunk is a string.
Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbols.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]- ): Promise<boolean>;
This method is similar to Array.prototype.every and calls fn on each chunk in the stream to check if all awaited return values are truthy for fn. Once an fn call's awaited return value is falsy, the stream is destroyed and the promise is fulfilled with false. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true.
@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to true if fn returned a truthy value for every one of the chunks.
This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise, that promise will be awaited.
@param fn a function to filter chunks from the stream. Async or not.
@returns a stream filtered with the predicate fn.
Once the decipher.final() method has been called, the Decipheriv object can no longer be used to decrypt data. Attempts to call decipher.final() more than once will result in an error being thrown.
@returns Any remaining deciphered contents. If outputEncoding is specified, a string is returned. If an outputEncoding is not provided, a Buffer is returned.
outputEncoding: BufferEncoding): string;
Once the decipher.final() method has been called, the Decipheriv object can no longer be used to decrypt data. Attempts to call decipher.final() more than once will result in an error being thrown.
@param outputEncoding The encoding of the return value.
@returns Any remaining deciphered contents. If outputEncoding is specified, a string is returned. If an outputEncoding is not provided, a Buffer is returned.
- ): Promise<undefined | T>;
This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.
@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.
find(): Promise<any>;
This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.
@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.
This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.
It is possible to return a stream or another iterable or async iterable from fn, and the resulting streams will be merged (flattened) into the returned stream.
@param fn a function to map over every chunk in the stream. May be async. May be a stream or generator.
@returns a stream flat-mapped with the function fn.
- ): Promise<void>;
This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise, that promise will be awaited.
This method is different from for await...of loops in that it can optionally process chunks concurrently. In addition, a forEach iteration can only be stopped by having passed a signal option and aborting the related AbortController, while for await...of can be stopped with break or return. In either case the stream will be destroyed.
This method is different from listening to the 'data' event in that it uses the readable event in the underlying machinery and can limit the number of concurrent fn calls.
@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise for when the stream has finished.
Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to EventEmitter.defaultMaxListeners.
The readable.isPaused() method returns the current operating state of the Readable. This is used primarily by the mechanism that underlies the readable.pipe() method. In most typical cases, there will be no reason to use this method directly. const readable = new stream.Readable(); readable.isPaused(); // === false readable.pause(); readable.isPaused(); // === true readable.resume(); readable.isPaused(); // === false
- options?: { destroyOnReturn: boolean }): AsyncIterator<any>;
The iterator created by this method gives users the option to cancel the destruction of the stream if the for await...of loop is exited by return, break, or throw, or if the iterator should destroy the stream if the stream emitted an error during iteration.
- eventName: string | symbol, listener?: Function): number;
Returns the number of listeners listening for the event named
eventName. Iflisteneris provided, it will return how many times the listener is found in the list of the listeners of the event.@param eventNameThe name of the event being listened for
@param listenerThe event handler function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName.server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ] - map(
This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise, that promise will be awaited before being passed to the result stream.
@param fn a function to map over every chunk in the stream. Async or not.
@returns a stream mapped with the function fn.
- eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for
emitter.removeListener(). - on(event: 'close',listener: () => void): this;
Adds the
listenerfunction to the end of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
- once(event: 'close',listener: () => void): this;
Adds a one-time
listenerfunction for the event namedeventName. The next timeeventNameis triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a@param listenerThe callback function
The readable.pause() method will cause a stream in flowing mode to stop emitting 'data' events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer. const readable = getReadableStreamSomehow(); readable.on('data', (chunk) => { console.log(`Received ${chunk.length} bytes of data.`); readable.pause(); console.log('There will be no additional data for 1 second.'); setTimeout(() => { console.log('Now data will start flowing again.'); readable.resume(); }, 1000); });
The readable.pause() method has no effect if there is a 'readable' event listener.
- event: 'close', listener: () => void): this;
Adds the
listenerfunction to the beginning of the listeners array for the event namedeventName. No checks are made to see if thelistenerhas already been added. Multiple calls passing the same combination ofeventNameandlistenerwill result in thelistenerbeing added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
- event: 'close',listener: () => void): this;
Adds a one-time
listenerfunction for the event namedeventNameto the beginning of the listeners array. The next timeeventNameis triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });Returns a reference to the
EventEmitter, so that calls can be chained.@param listenerThe callback function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName, including any wrappers (such as those created by.once()).import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log'); - read(size?: number): any;
The readable.read() method reads data out of the internal buffer and returns it. If no data is available to be read, null is returned. By default, the data is returned as a Buffer object unless an encoding has been specified using the readable.setEncoding() method or the stream is operating in object mode.
The optional size argument specifies a specific number of bytes to read. If size bytes are not available to be read, null will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.
If the size argument is not specified, all of the data contained in the internal buffer will be returned.
The size argument must be less than or equal to 1 GiB.
The readable.read() method should only be called on Readable streams operating in paused mode. In flowing mode, readable.read() is called automatically until the internal buffer is fully drained. const readable = getReadableStreamSomehow(); // 'readable' may be triggered multiple times as data is buffered in readable.on('readable', () => { let chunk; console.log('Stream is readable (new data received in buffer)'); // Use a loop to make sure we read all currently available data while (null !== (chunk = readable.read())) { console.log(`Read ${chunk.length} bytes of data...`); } }); // 'end' will be triggered once when there is no more data available readable.on('end', () => { console.log('Reached end of stream.'); });
Each call to readable.read() returns a chunk of data, or null. The chunks are not concatenated. A while loop is necessary to consume all data currently in the buffer. When reading a large file, .read() may return null, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new 'readable' event will be emitted when there is more data in the buffer. Finally the 'end' event will be emitted when there is no more data to come.
Therefore to read a file's whole contents from a readable, it is necessary to collect chunks across multiple 'readable' events: const chunks = []; readable.on('readable', () => { let chunk; while (null !== (chunk = readable.read())) { chunks.push(chunk); } }); readable.on('end', () => { const content = chunks.join(''); });
A Readable stream in object mode will always return a single item from a call to readable.read(size), regardless of the value of the size argument.
If the readable.read() method returns a chunk of data, a 'data' event will also be emitted.
Calling read after the 'end' event has been emitted will return null. No runtime error will be raised.
@param size Optional argument to specify how much data to read.
- initial?: undefined,): Promise<T>;
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied, the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.
The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the readable.map method.
@param fn a reducer function to call over every chunk in the stream. Async or not.
@param initial the initial value to use in the reduction.
@returns a promise for the final value of the reduction.
initial: T,): Promise<T>;
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied, the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.
The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the readable.map method.
@param fn a reducer function to call over every chunk in the stream. Async or not.
@param initial the initial value to use in the reduction.
@returns a promise for the final value of the reduction.
- eventName?: string | symbol): this;
Removes all listeners, or those of the specified
eventName.It is bad practice to remove listeners added elsewhere in the code, particularly when the
EventEmitterinstance was created by some other component or module (e.g. sockets or file streams).Returns a reference to the
EventEmitter, so that calls can be chained. - event: 'close',listener: () => void): this;
Removes the specified `listener` from the listener array for the event named `eventName`.

```js
const callback = (stream) => {
  console.log('someone connected!');
};
server.on('connection', callback);
// ...
server.removeListener('connection', callback);
```

`removeListener()` will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified `eventName`, then `removeListener()` must be called multiple times to remove each instance.

Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any `removeListener()` or `removeAllListeners()` calls after emitting and before the last listener finishes execution will not remove them from `emit()` in progress. Subsequent events behave as expected.

```js
import { EventEmitter } from 'node:events';
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();

const callbackA = () => {
  console.log('A');
  myEmitter.removeListener('event', callbackB);
};

const callbackB = () => {
  console.log('B');
};

myEmitter.on('event', callbackA);
myEmitter.on('event', callbackB);

// callbackA removes listener callbackB but it will still be called.
// Internal listener array at time of emit [callbackA, callbackB]
myEmitter.emit('event');
// Prints:
//   A
//   B

// callbackB is now removed.
// Internal listener array [callbackA]
myEmitter.emit('event');
// Prints:
//   A
```

Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the `emitter.listeners()` method will need to be recreated.

When a single function has been added as a handler multiple times for a single event (as in the example below), `removeListener()` will remove the most recently added instance. In the example the `once('ping')` listener is removed:

```js
import { EventEmitter } from 'node:events';
const ee = new EventEmitter();

function pong() {
  console.log('pong');
}

ee.on('ping', pong);
ee.once('ping', pong);
ee.removeListener('ping', pong);

ee.emit('ping');
ee.emit('ping');
```

Returns a reference to the `EventEmitter`, so that calls can be chained.

The `readable.resume()` method causes an explicitly paused `Readable` stream to resume emitting `'data'` events, switching the stream into flowing mode.

The `readable.resume()` method can be used to fully consume the data from a stream without actually processing any of that data:

```js
getReadableStreamSomehow()
  .resume()
  .on('end', () => {
    console.log('Reached the end, but did not read anything.');
  });
```

The `readable.resume()` method has no effect if there is a `'readable'` event listener.

- auto_padding?: boolean): this;
When data has been encrypted without standard block padding, calling `decipher.setAutoPadding(false)` will disable automatic padding to prevent `decipher.final()` from checking for and removing padding.

Turning auto padding off will only work if the input data's length is a multiple of the cipher's block size.

The `decipher.setAutoPadding()` method must be called before `decipher.final()`.

@returns for method chaining.
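A minimal sketch of the effect of disabling auto padding (assuming `aes-128-cbc`, whose PKCS#7 padding would normally be stripped by `decipher.final()`):

```javascript
import { createCipheriv, createDecipheriv, randomBytes } from 'node:crypto';

const key = randomBytes(16);
const iv = randomBytes(16);

// Encrypt 10 bytes; PKCS#7 padding rounds the ciphertext up to one 16-byte block.
const cipher = createCipheriv('aes-128-cbc', key, iv);
const encrypted = Buffer.concat([cipher.update('0123456789'), cipher.final()]);

// With auto padding disabled, final() does not strip the padding bytes.
const decipher = createDecipheriv('aes-128-cbc', key, iv).setAutoPadding(false);
const plain = Buffer.concat([decipher.update(encrypted), decipher.final()]);

console.log(plain.length); // → 16 (10 data bytes + 6 padding bytes of value 0x06)
```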
- encoding: BufferEncoding): this;

  The `writable.setDefaultEncoding()` method sets the default `encoding` for a `Writable` stream.

  @param encoding - The new default encoding
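A small sketch of the default encoding in action (hedged example; the top-level `await` assumes an ES module). Strings written without an explicit encoding are decoded using the default:

```javascript
import { Writable } from 'node:stream';
import { once } from 'node:events';

const chunks = [];
const sink = new Writable({
  write(chunk, encoding, callback) {
    chunks.push(chunk); // chunk arrives as a decoded Buffer
    callback();
  },
});

// Strings written without an explicit encoding are now interpreted as hex.
sink.setDefaultEncoding('hex');
sink.write('616263'); // the bytes 0x61 0x62 0x63
sink.end();

await once(sink, 'finish');
console.log(Buffer.concat(chunks).toString()); // → 'abc'
```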
- encoding: BufferEncoding): this;

  The `readable.setEncoding()` method sets the character encoding for data read from the `Readable` stream.

  By default, no encoding is assigned and stream data will be returned as `Buffer` objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than as `Buffer` objects. For instance, calling `readable.setEncoding('utf8')` will cause the output data to be interpreted as UTF-8 data, and passed as strings. Calling `readable.setEncoding('hex')` will cause the data to be encoded in hexadecimal string format.

  The `Readable` stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream as `Buffer` objects.

  ```js
  const readable = getReadableStreamSomehow();
  readable.setEncoding('utf8');
  readable.on('data', (chunk) => {
    assert.equal(typeof chunk, 'string');
    console.log('Got %d characters of string data:', chunk.length);
  });
  ```

  @param encoding - The encoding to use.
- n: number): this;

  By default `EventEmitter`s will print a warning if more than `10` listeners are added for a particular event. This is a useful default that helps finding memory leaks. The `emitter.setMaxListeners()` method allows the limit to be modified for this specific `EventEmitter` instance. The value can be set to `Infinity` (or `0`) to indicate an unlimited number of listeners.

  Returns a reference to the `EventEmitter`, so that calls can be chained.

- some(): Promise<boolean>;
This method is similar to `Array.prototype.some` and calls `fn` on each chunk in the stream until the awaited return value is `true` (or any truthy value). Once an `fn` call on a chunk's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with `true`. If none of the `fn` calls on the chunks return a truthy value, the promise is fulfilled with `false`.

@param fn - a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to `true` if `fn` returned a truthy value for at least one of the chunks.

- @param limit - the number of chunks to take from the readable.
  @returns a stream with limit chunks taken.
- ): Promise<any[]>;

  This method allows easily obtaining the contents of a stream.

  As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.

  @returns a promise containing an array with the contents of the stream.
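The `some()`, `take()`, and `toArray()` helpers above can be sketched together with a small in-memory stream (hedged example; these helpers are experimental, and the top-level `await` assumes an ES module):

```javascript
import { Readable } from 'node:stream';

// take() limits the stream to the first n chunks; toArray() collects them.
const firstTwo = await Readable.from([1, 2, 3, 4]).take(2).toArray();
console.log(firstTwo); // → [1, 2]

// some() short-circuits: the stream is destroyed at the first truthy result.
const hasEven = await Readable.from([1, 3, 4, 5]).some((chunk) => chunk % 2 === 0);
console.log(hasEven); // → true
```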
The `writable.uncork()` method flushes all data buffered since `cork` was called.

When using `writable.cork()` and `writable.uncork()` to manage the buffering of writes to a stream, defer calls to `writable.uncork()` using `process.nextTick()`. Doing so allows batching of all `writable.write()` calls that occur within a given Node.js event loop phase.

```js
stream.cork();
stream.write('some ');
stream.write('data ');
process.nextTick(() => stream.uncork());
```

If the `writable.cork()` method is called multiple times on a stream, the same number of calls to `writable.uncork()` must be called to flush the buffered data.

```js
stream.cork();
stream.write('some ');
stream.cork();
stream.write('data ');
process.nextTick(() => {
  stream.uncork();
  // The data will not be flushed until uncork() is called a second time.
  stream.uncork();
});
```

See also: `writable.cork()`.

- destination?: WritableStream): this;
The `readable.unpipe()` method detaches a `Writable` stream previously attached using the `pipe` method.

If the `destination` is not specified, then all pipes are detached.

If the `destination` is specified, but no pipe is set up for it, then the method does nothing.

```js
import fs from 'node:fs';
const readable = getReadableStreamSomehow();
const writable = fs.createWriteStream('file.txt');
// All the data from readable goes into 'file.txt',
// but only for the first second.
readable.pipe(writable);
setTimeout(() => {
  console.log('Stop writing to file.txt.');
  readable.unpipe(writable);
  console.log('Manually close the file stream.');
  writable.end();
}, 1000);
```

@param destination - Optional specific stream to unpipe
- chunk: any, encoding?: BufferEncoding): void;

  Passing `chunk` as `null` signals the end of the stream (EOF) and behaves the same as `readable.push(null)`, after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.

  The `readable.unshift()` method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.

  The `stream.unshift(chunk)` method cannot be called after the `'end'` event has been emitted or a runtime error will be thrown.

  Developers using `stream.unshift()` often should consider switching to use of a `Transform` stream instead. See the `API for stream implementers` section for more information.

  ```js
  // Pull off a header delimited by \n\n.
  // Use unshift() if we get too much.
  // Call the callback with (error, header, stream).
  import { StringDecoder } from 'node:string_decoder';
  function parseHeader(stream, callback) {
    stream.on('error', callback);
    stream.on('readable', onReadable);
    const decoder = new StringDecoder('utf8');
    let header = '';
    function onReadable() {
      let chunk;
      while (null !== (chunk = stream.read())) {
        const str = decoder.write(chunk);
        if (str.includes('\n\n')) {
          // Found the header boundary.
          const split = str.split(/\n\n/);
          header += split.shift();
          const remaining = split.join('\n\n');
          const buf = Buffer.from(remaining, 'utf8');
          stream.removeListener('error', callback);
          // Remove the 'readable' listener before unshifting.
          stream.removeListener('readable', onReadable);
          if (buf.length)
            stream.unshift(buf);
          // Now the body of the message can be read from the stream.
          callback(null, header, stream);
          return;
        }
        // Still reading the header.
        header += str;
      }
    }
  }
  ```

  Unlike `push`, `stream.unshift(chunk)` will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results if `readable.unshift()` is called during a read (i.e. from within a `_read` implementation on a custom stream). Following the call to `readable.unshift()` with an immediate `push` will reset the reading state appropriately, however it is best to simply avoid calling `readable.unshift()` while in the process of performing a read.

  @param chunk - Chunk of data to unshift onto the read queue. For streams not operating in object mode, `chunk` must be a {string}, {Buffer}, {TypedArray}, {DataView} or `null`. For object mode streams, `chunk` may be any JavaScript value.
  @param encoding - Encoding of string chunks. Must be a valid `Buffer` encoding, such as `'utf8'` or `'ascii'`.

- data: ArrayBufferView): NonSharedBuffer;
Updates the decipher with `data`. If the `inputEncoding` argument is given, the `data` argument is a string using the specified encoding. If the `inputEncoding` argument is not given, `data` must be a `Buffer`. If `data` is a `Buffer` then `inputEncoding` is ignored.

The `outputEncoding` specifies the output format of the enciphered data. If the `outputEncoding` is specified, a string using the specified encoding is returned. If no `outputEncoding` is provided, a `Buffer` is returned.

The `decipher.update()` method can be called multiple times with new data until `decipher.final()` is called. Calling `decipher.update()` after `decipher.final()` will result in an error being thrown.

The remaining overloads behave as described above:

data: string,
): NonSharedBuffer;

@param inputEncoding - The `encoding` of the `data` string.

data: ArrayBufferView,
inputEncoding: undefined,
): string;

@param inputEncoding - The `encoding` of the `data` string.
@param outputEncoding - The `encoding` of the return value.

data: string,
): string;

@param inputEncoding - The `encoding` of the `data` string.
@param outputEncoding - The `encoding` of the return value.

- wrap(stream: ReadableStream): this;
Prior to Node.js 0.10, streams did not implement the entire `node:stream` module API as it is currently defined. (See `Compatibility` for more information.)

When using an older Node.js library that emits `'data'` events and has a `pause` method that is advisory only, the `readable.wrap()` method can be used to create a `Readable` stream that uses the old stream as its data source.

It will rarely be necessary to use `readable.wrap()` but the method has been provided as a convenience for interacting with older Node.js applications and libraries.

```js
import { OldReader } from './old-api-module.js';
import { Readable } from 'node:stream';
const oreader = new OldReader();
const myReader = new Readable().wrap(oreader);

myReader.on('readable', () => {
  myReader.read(); // etc.
});
```

@param stream - An "old style" readable stream
- chunk: any,): boolean;

  The `writable.write()` method writes some data to the stream, and calls the supplied `callback` once the data has been fully handled. If an error occurs, the `callback` will be called with the error as its first argument. The `callback` is called asynchronously and before `'error'` is emitted.

  The return value is `true` if the internal buffer is less than the `highWaterMark` configured when the stream was created after admitting `chunk`. If `false` is returned, further attempts to write data to the stream should stop until the `'drain'` event is emitted.

  While a stream is not draining, calls to `write()` will buffer `chunk`, and return `false`. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the `'drain'` event will be emitted. Once `write()` returns `false`, do not write more chunks until the `'drain'` event is emitted. While calling `write()` on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing to a socket that is not draining may lead to a remotely exploitable vulnerability.

  Writing data while the stream is not draining is particularly problematic for a `Transform`, because the `Transform` streams are paused by default until they are piped or a `'data'` or `'readable'` event handler is added.

  If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a `Readable` and use `pipe`. However, if calling `write()` is preferred, it is possible to respect backpressure and avoid memory issues using the `'drain'` event:

  ```js
  function write(data, cb) {
    if (!stream.write(data)) {
      stream.once('drain', cb);
    } else {
      process.nextTick(cb);
    }
  }

  // Wait for cb to be called before doing any other write.
  write('hello', () => {
    console.log('Write completed, do more writes now.');
  });
  ```

  A `Writable` stream in object mode will always ignore the `encoding` argument.

  @param chunk - Optional data to write. For streams not operating in object mode, `chunk` must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, `chunk` may be any JavaScript value other than `null`.
  @param callback - Callback for when this chunk of data is flushed.
  @returns `false` if the stream wishes for the calling code to wait for the `'drain'` event to be emitted before continuing to write additional data; otherwise `true`.

  The second overload behaves as described above:

  chunk: any,
  encoding: BufferEncoding,
  ): boolean;

  @param chunk - Optional data to write. For streams not operating in object mode, `chunk` must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, `chunk` may be any JavaScript value other than `null`.
  @param encoding - The encoding, if `chunk` is a string.
  @param callback - Callback for when this chunk of data is flushed.
  @returns `false` if the stream wishes for the calling code to wait for the `'drain'` event to be emitted before continuing to write additional data; otherwise `true`.
interface DiffieHellmanGroupConstructor
interface DSAKeyPairKeyObjectOptions
interface DSAKeyPairOptions<PubF extends KeyFormat, PrivF extends KeyFormat>
interface ECKeyPairKeyObjectOptions
interface ECKeyPairOptions<PubF extends KeyFormat, PrivF extends KeyFormat>
interface ED25519KeyPairKeyObjectOptions
interface ED25519KeyPairOptions<PubF extends KeyFormat, PrivF extends KeyFormat>
interface ED448KeyPairKeyObjectOptions
interface ED448KeyPairOptions<PubF extends KeyFormat, PrivF extends KeyFormat>
interface GeneratePrimeOptions
interface GeneratePrimeOptionsArrayBuffer
interface GeneratePrimeOptionsBigInt
interface HashOptions
- outputLength?: number

  For XOF hash functions such as `shake256`, the `outputLength` option can be used to specify the desired output length in bytes.

- signal?: AbortSignal

  When provided the corresponding `AbortController` can be used to cancel an asynchronous action.

- transform?: (this: Transform, chunk: any, encoding: BufferEncoding, callback: TransformCallback) => void
interface JsonWebKeyInput
interface JwkKeyExportOptions
interface KeyExportOptions<T extends KeyFormat>
interface KeyPairSyncResult<T1 extends string | Buffer, T2 extends string | Buffer>
interface MLDSAKeyPairKeyObjectOptions
interface MLDSAKeyPairOptions<PubF extends KeyFormat, PrivF extends KeyFormat>
interface MLKEMKeyPairKeyObjectOptions
interface MLKEMKeyPairOptions<PubF extends KeyFormat, PrivF extends KeyFormat>
interface OneShotDigestOptions
- outputLength?: number
For XOF hash functions such as 'shake256', the outputLength option can be used to specify the desired output length in bytes.
interface OneShotDigestOptionsWithBufferEncoding
- outputLength?: number
For XOF hash functions such as 'shake256', the outputLength option can be used to specify the desired output length in bytes.
interface OneShotDigestOptionsWithStringEncoding
- outputLength?: number
For XOF hash functions such as 'shake256', the outputLength option can be used to specify the desired output length in bytes.
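A minimal sketch of the `outputLength` option (assuming the runtime's OpenSSL build exposes SHAKE256):

```javascript
import { createHash } from 'node:crypto';

// Request a 16-byte digest from the SHAKE256 extendable-output function.
const digest = createHash('shake256', { outputLength: 16 })
  .update('hello')
  .digest('hex');

console.log(digest.length); // → 32 hex characters (16 bytes)
```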
interface PrivateKeyInput
interface RandomUUIDOptions
- disableEntropyCache?: boolean
By default, to improve performance, Node.js will pre-emptively generate and persistently cache enough random data to generate up to 128 random UUIDs. To generate a UUID without using the cache, set `disableEntropyCache` to `true`.
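A minimal sketch of the option in use:

```javascript
import { randomUUID } from 'node:crypto';

// Bypass the pre-generated entropy cache for this one call.
const uuid = randomUUID({ disableEntropyCache: true });

console.log(uuid); // an RFC 4122 version 4 UUID, e.g. 8-4-4-4-12 hex digits
```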
interface RSAKeyPairKeyObjectOptions
interface RSAKeyPairOptions<PubF extends KeyFormat, PrivF extends KeyFormat>
interface RsaPrivateKey
interface RSAPSSKeyPairKeyObjectOptions
interface RSAPSSKeyPairOptions<PubF extends KeyFormat, PrivF extends KeyFormat>
interface RsaPublicKey
interface ScryptOptions
interface SecureHeapUsage
interface SigningOptions
interface SignJsonWebKeyInput
interface SignKeyObjectInput
interface SignPrivateKeyInput
interface SLHDSAKeyPairKeyObjectOptions
interface SLHDSAKeyPairOptions<PubF extends KeyFormat, PrivF extends KeyFormat>
interface VerifyJsonWebKeyInput
interface VerifyKeyObjectInput
interface VerifyPublicKeyInput
interface X25519KeyPairKeyObjectOptions
interface X25519KeyPairOptions<PubF extends KeyFormat, PrivF extends KeyFormat>
interface X448KeyPairKeyObjectOptions
interface X448KeyPairOptions<PubF extends KeyFormat, PrivF extends KeyFormat>
interface X509CheckOptions
- type Argon2Algorithm = 'argon2d' | 'argon2i' | 'argon2id'
- type BinaryLike = string | NodeJS.ArrayBufferView
- type BinaryToTextEncoding = 'base64' | 'base64url' | 'hex' | 'binary'
- type CharacterEncoding = 'utf8' | 'utf-8' | 'utf16le' | 'utf-16le' | 'latin1'
- type CipherCCMTypes = 'aes-128-ccm' | 'aes-192-ccm' | 'aes-256-ccm'
- type CipherChaCha20Poly1305Types = 'chacha20-poly1305'
- type CipherGCMTypes = 'aes-128-gcm' | 'aes-192-gcm' | 'aes-256-gcm'
- type CipherKey = BinaryLike | KeyObject
- type CipherMode = 'cbc' | 'ccm' | 'cfb' | 'ctr' | 'ecb' | 'gcm' | 'ocb' | 'ofb' | 'stream' | 'wrap' | 'xts'
- type CipherOCBTypes = 'aes-128-ocb' | 'aes-192-ocb' | 'aes-256-ocb'
- type DiffieHellmanGroup = Omit<DiffieHellman, 'setPublicKey' | 'setPrivateKey'>
- type DSAEncoding = 'der' | 'ieee-p1363'
- type ECDHKeyFormat = 'compressed' | 'uncompressed' | 'hybrid'
- type KeyFormat = 'pem' | 'der' | 'jwk'
- type KeyObjectType = 'secret' | 'public' | 'private'
- type KeyType = 'dh' | 'dsa' | 'ec' | 'ed25519' | 'ed448' | 'ml-dsa-44' | 'ml-dsa-65' | 'ml-dsa-87' | 'ml-kem-1024' | 'ml-kem-512' | 'ml-kem-768' | 'rsa-pss' | 'rsa' | 'slh-dsa-sha2-128f' | 'slh-dsa-sha2-128s' | 'slh-dsa-sha2-192f' | 'slh-dsa-sha2-192s' | 'slh-dsa-sha2-256f' | 'slh-dsa-sha2-256s' | 'slh-dsa-shake-128f' | 'slh-dsa-shake-128s' | 'slh-dsa-shake-192f' | 'slh-dsa-shake-192s' | 'slh-dsa-shake-256f' | 'slh-dsa-shake-256s' | 'x25519' | 'x448'
- type LargeNumberLike = NodeJS.ArrayBufferView | SharedArrayBuffer | ArrayBuffer | bigint
- type LegacyCharacterEncoding = 'ascii' | 'binary' | 'ucs2' | 'ucs-2'
- type UUID = `${string}-${string}-${string}-${string}-${string}`