Node.js module
crypto
The 'node:crypto' module provides cryptographic functionality, including wrappers for OpenSSL's hash, HMAC, cipher, decipher, sign, verify, and key derivation functions.
It supports common algorithms such as SHA-256, AES, RSA, ECDH, and more. The module also offers secure random number generation, key management, and certificate handling, making it essential for implementing secure protocols and data encryption.
Works in Bun
Most crypto functionality is implemented, but some specific methods related to engine configuration, FIPS mode, and secure heap usage are missing.
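For orientation, a minimal sketch of common usage (illustrative values, not from the original docs):

import { createHash, randomBytes } from 'node:crypto';

// SHA-256 digest of a string, hex-encoded.
console.log(createHash('sha256').update('some data').digest('hex'));

// 16 cryptographically secure random bytes.
console.log(randomBytes(16).toString('hex'));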
namespace constants
defaultCoreCipherList: Specifies the built-in default cipher list used by Node.js (colon-separated values).
defaultCipherList: Specifies the active default cipher list used by the current Node.js process (colon-separated values).
Causes the salt length for RSA_PKCS1_PSS_PADDING to be determined automatically when verifying a signature.
Sets the salt length for RSA_PKCS1_PSS_PADDING to the digest size when signing or verifying.
Sets the salt length for RSA_PKCS1_PSS_PADDING to the maximum permissible value when signing data.
Applies multiple bug workarounds within OpenSSL. See https://www.openssl.org/docs/man1.0.2/ssl/SSL_CTX_set_options.html for details.
Instructs OpenSSL to allow a non-[EC]DHE-based key exchange mode for TLS v1.3.
Allows legacy insecure renegotiation between OpenSSL and unpatched clients or servers. See https://www.openssl.org/docs/man1.0.2/ssl/SSL_CTX_set_options.html.
Attempts to use the server's preferences instead of the client's when selecting a cipher. See https://www.openssl.org/docs/man1.0.2/ssl/SSL_CTX_set_options.html.
Instructs OpenSSL to use Cisco's version identifier of DTLS_BAD_VER.
Instructs OpenSSL to turn on cookie exchange.
Instructs OpenSSL to add server-hello extension from an early version of the cryptopro draft.
Instructs OpenSSL to disable an SSL 3.0/TLS 1.0 vulnerability workaround added in OpenSSL 0.9.6d.
Allows initial connection to servers that do not support RI.
Instructs OpenSSL to disable support for SSL/TLS compression.
Instructs OpenSSL to disable encrypt-then-MAC.
Instructs OpenSSL to disable renegotiation.
Instructs OpenSSL to always start a new session when performing renegotiation.
Instructs OpenSSL to turn off SSL v2.
Instructs OpenSSL to turn off SSL v3.
Instructs OpenSSL to disable use of RFC4507bis tickets.
Instructs OpenSSL to turn off TLS v1.
Instructs OpenSSL to turn off TLS v1.1.
Instructs OpenSSL to turn off TLS v1.2.
Instructs OpenSSL to turn off TLS v1.3.
Instructs OpenSSL server to prioritize ChaCha20-Poly1305 when the client does. This option has no effect if SSL_OP_CIPHER_SERVER_PREFERENCE is not enabled.
Instructs OpenSSL to disable version rollback attack detection.
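These constants are typically combined with bitwise OR and passed to TLS APIs via the secureOptions option. A minimal sketch, assuming placeholder key.pem/cert.pem files:

import { constants } from 'node:crypto';
import { readFileSync } from 'node:fs';
import tls from 'node:tls';

const server = tls.createServer({
  key: readFileSync('key.pem'),   // placeholder path
  cert: readFileSync('cert.pem'), // placeholder path
  // Combine options with bitwise OR to disable TLS 1.0 and TLS 1.1.
  secureOptions: constants.SSL_OP_NO_TLSv1 | constants.SSL_OP_NO_TLSv1_1,
});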
class Cipher
Instances of the Cipher class are used to encrypt data. The class can be used in one of two ways:
- As a stream that is both readable and writable, where plain unencrypted data is written to produce encrypted data on the readable side, or
- Using the cipher.update() and cipher.final() methods to produce the encrypted data.
The createCipheriv method is used to create Cipher instances. Cipher objects are not to be created directly using the new keyword.
Example: Using Cipher objects as streams:

const {
  scrypt,
  randomFill,
  createCipheriv,
} = await import('node:crypto');

const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';

// First, we'll generate the key. The key length is dependent on the algorithm.
// In this case for aes192, it is 24 bytes (192 bits).
scrypt(password, 'salt', 24, (err, key) => {
  if (err) throw err;
  // Then, we'll generate a random initialization vector
  randomFill(new Uint8Array(16), (err, iv) => {
    if (err) throw err;
    // Once we have the key and iv, we can create and use the cipher...
    const cipher = createCipheriv(algorithm, key, iv);

    let encrypted = '';
    cipher.setEncoding('hex');

    cipher.on('data', (chunk) => encrypted += chunk);
    cipher.on('end', () => console.log(encrypted));

    cipher.write('some clear text data');
    cipher.end();
  });
});
Example: Using Cipher and piped streams:

import {
  createReadStream,
  createWriteStream,
} from 'node:fs';
import {
  pipeline,
} from 'node:stream';
const {
  scrypt,
  randomFill,
  createCipheriv,
} = await import('node:crypto');

const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';

// First, we'll generate the key. The key length is dependent on the algorithm.
// In this case for aes192, it is 24 bytes (192 bits).
scrypt(password, 'salt', 24, (err, key) => {
  if (err) throw err;
  // Then, we'll generate a random initialization vector
  randomFill(new Uint8Array(16), (err, iv) => {
    if (err) throw err;
    const cipher = createCipheriv(algorithm, key, iv);

    const input = createReadStream('test.js');
    const output = createWriteStream('test.enc');

    pipeline(input, cipher, output, (err) => {
      if (err) throw err;
    });
  });
});
Example: Using the cipher.update() and cipher.final() methods:

const {
  scrypt,
  randomFill,
  createCipheriv,
} = await import('node:crypto');

const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';

// First, we'll generate the key. The key length is dependent on the algorithm.
// In this case for aes192, it is 24 bytes (192 bits).
scrypt(password, 'salt', 24, (err, key) => {
  if (err) throw err;
  // Then, we'll generate a random initialization vector
  randomFill(new Uint8Array(16), (err, iv) => {
    if (err) throw err;
    const cipher = createCipheriv(algorithm, key, iv);

    let encrypted = cipher.update('some clear text data', 'utf8', 'hex');
    encrypted += cipher.final('hex');
    console.log(encrypted);
  });
});
- allowHalfOpen: boolean
If false then the stream will automatically end the writable side when the readable side ends. Set initially by the allowHalfOpen constructor option, which defaults to true. This can be changed manually to change the half-open behavior of an existing Duplex stream instance, but must be changed before the 'end' event is emitted.
- readable: boolean
Is true if it is safe to call read, which means the stream has not been destroyed or emitted 'error' or 'end'.
- readonly readableAborted: boolean
Returns whether the stream was destroyed or errored before emitting 'end'.
- readonly readableEncoding: null | BufferEncoding
Getter for the property encoding of a given Readable stream. The encoding property can be set using the setEncoding method.
- readonly readableFlowing: null | boolean
This property reflects the current state of a Readable stream as described in the Three states section.
- readonly readableHighWaterMark: number
Returns the value of highWaterMark passed when creating this Readable.
- readonly readableLength: number
This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the highWaterMark.
- readonly writable: boolean
Is true if it is safe to call writable.write(), which means the stream has not been destroyed, errored, or ended.
- readonly writableCorked: number
Number of times writable.uncork() needs to be called in order to fully uncork the stream.
- readonly writableEnded: boolean
Is true after writable.end() has been called. This property does not indicate whether the data has been flushed; for that, use writable.writableFinished instead.
- readonly writableHighWaterMark: number
Returns the value of highWaterMark passed when creating this Writable.
- readonly writableLength: number
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the highWaterMark.
- readonly writableNeedDrain: boolean
Is true if the stream's buffer has been full and the stream will emit 'drain'.
- static captureRejections: boolean
Value: boolean
Change the default captureRejections option on all new EventEmitter objects.
- readonly static captureRejectionSymbol: typeof captureRejectionSymbol
Value: Symbol.for('nodejs.rejection')
See how to write a custom rejection handler.
- static defaultMaxListeners: number
By default, a maximum of 10 listeners can be registered for any single event. This limit can be changed for individual EventEmitter instances using the emitter.setMaxListeners(n) method. To change the default for all EventEmitter instances, the events.defaultMaxListeners property can be used. If this value is not a positive number, a RangeError is thrown.
Take caution when setting events.defaultMaxListeners because the change affects all EventEmitter instances, including those created before the change is made. However, calling emitter.setMaxListeners(n) still has precedence over events.defaultMaxListeners.
This is not a hard limit. The EventEmitter instance will allow more listeners to be added but will output a trace warning to stderr indicating that a "possible EventEmitter memory leak" has been detected. For any single EventEmitter, the emitter.getMaxListeners() and emitter.setMaxListeners() methods can be used to temporarily avoid this warning:

import { EventEmitter } from 'node:events';
const emitter = new EventEmitter();
emitter.setMaxListeners(emitter.getMaxListeners() + 1);
emitter.once('event', () => {
  // do stuff
  emitter.setMaxListeners(Math.max(emitter.getMaxListeners() - 1, 0));
});

The --trace-warnings command-line flag can be used to display the stack trace for such warnings.
The emitted warning can be inspected with process.on('warning') and will have the additional emitter, type, and count properties, referring to the event emitter instance, the event's name, and the number of attached listeners, respectively. Its name property is set to 'MaxListenersExceededWarning'.
- readonly static errorMonitor: typeof errorMonitor
This symbol shall be used to install a listener for only monitoring 'error' events. Listeners installed using this symbol are called before the regular 'error' listeners are called.
Installing a listener using this symbol does not change the behavior once an 'error' event is emitted. Therefore, the process will still crash if no regular 'error' listener is installed.
listener is installed. Calls
readable.destroy()
with anAbortError
and returns a promise that fulfills when the stream is finished.- event: 'close',listener: () => void): this;
Event emitter. The defined events on documents include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
Each of the remaining overloads registers a listener for the corresponding event and shares this description:
event: 'data', listener: (chunk: any) => void): this;
event: 'drain', listener: () => void): this;
event: 'end', listener: () => void): this;
event: 'error',): this;
event: 'finish', listener: () => void): this;
event: 'pause', listener: () => void): this;
event: 'pipe',): this;
event: 'readable', listener: () => void): this;
event: 'resume', listener: () => void): this;
event: 'unpipe',): this;
event: string | symbol, listener: (...args: any[]) => void): this;
This method returns a new stream with chunks of the underlying stream paired with a counter in the form [index, chunk]. The first index value is 0 and it increases by 1 for each chunk produced.
@returns a stream of indexed pairs.
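A minimal sketch (illustrative values, not from the original text):

import { Readable } from 'node:stream';

const pairs = await Readable.from(['a', 'b']).asIndexedPairs().toArray();
console.log(pairs); // [ [ 0, 'a' ], [ 1, 'b' ] ]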
- stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
The writable.cork() method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.
The primary intent of writable.cork() is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them all to writable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork() without implementing writable._writev() may have an adverse effect on throughput.
See also: writable.uncork(), writable._writev().
- ): this;
Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the readable stream will release any internal resources and subsequent calls to push() will be ignored.
Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.
Implementors should not override this method, but instead implement readable._destroy().
@param error Error which will be passed as payload in 'error' event
- drop(limit: number,
This method returns a new stream with the first limit chunks dropped from the start.
@param limit the number of chunks to drop from the readable.
@returns a stream with limit chunks dropped from the start.
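A minimal sketch of drop in use (illustrative values, not from the original text):

import { Readable } from 'node:stream';

// Skip the first two chunks, then collect the rest.
const rest = await Readable.from([1, 2, 3, 4]).drop(2).toArray();
console.log(rest); // [ 3, 4 ]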
- emit(event: 'close'): boolean;
Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.
Returns true if the event had listeners, false otherwise.

import { EventEmitter } from 'node:events';
const myEmitter = new EventEmitter();

// First listener
myEmitter.on('event', function firstListener() {
  console.log('Helloooo! first listener');
});
// Second listener
myEmitter.on('event', function secondListener(arg1, arg2) {
  console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
});
// Third listener
myEmitter.on('event', function thirdListener(...args) {
  const parameters = args.join(', ');
  console.log(`event with parameters ${parameters} in third listener`);
});

console.log(myEmitter.listeners('event'));

myEmitter.emit('event', 1, 2, 3, 4, 5);

// Prints:
// [
//   [Function: firstListener],
//   [Function: secondListener],
//   [Function: thirdListener]
// ]
// Helloooo! first listener
// event with parameters 1, 2 in second listener
// event with parameters 1, 2, 3, 4, 5 in third listener
- end(cb?: () => void): this;
end(chunk: any, cb?: () => void): this;
end(chunk: any, encoding: BufferEncoding, cb?: () => void): this;
The overloads share this description. Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.
Calling the write method after calling end will raise an error.

// Write 'hello, ' and then end with 'world!'.
import fs from 'node:fs';
const file = fs.createWriteStream('example.txt');
file.write('hello, ');
file.end('world!');
// Writing more now is not allowed!

@param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.
@param encoding The encoding if chunk is a string
Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or Symbols.

import { EventEmitter } from 'node:events';

const myEE = new EventEmitter();
myEE.on('foo', () => {});
myEE.on('bar', () => {});

const sym = Symbol('symbol');
myEE.on(sym, () => {});

console.log(myEE.eventNames());
// Prints: [ 'foo', 'bar', Symbol(symbol) ]
- ): Promise<boolean>;
This method is similar to Array.prototype.every and calls fn on each chunk in the stream to check whether all awaited return values are truthy for fn. Once an fn call on a chunk's awaited return value is falsy, the stream is destroyed and the promise is fulfilled with false. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true.
@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to true if fn returned a truthy value for every one of the chunks.
This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise, that promise will be awaited.
@param fn a function to filter chunks from the stream. Async or not.
@returns a stream filtered with the predicate fn.
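A minimal sketch of every and filter (illustrative values, not from the original text):

import { Readable } from 'node:stream';

// every resolves once the answer is known.
const allPositive = await Readable.from([1, 2, 3]).every((n) => n > 0);
console.log(allPositive); // true

// filter yields a new stream containing only matching chunks.
for await (const n of Readable.from([1, 2, 3, 4]).filter((n) => n % 2 === 0)) {
  console.log(n); // 2, then 4
}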
Once the cipher.final() method has been called, the Cipher object can no longer be used to encrypt data. Attempts to call cipher.final() more than once will result in an error being thrown.
outputEncoding: BufferEncoding): string;
@param outputEncoding The encoding of the return value.
@returns Any remaining enciphered contents. If outputEncoding is specified, a string is returned. If an outputEncoding is not provided, a Buffer is returned.
- ): Promise<undefined | T>;
This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.
find(): Promise<any>;
@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.
if no element was found. This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.
It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.
@param fn a function to map over every chunk in the stream. May be async. May be a stream or generator.
@returns a stream flat-mapped with the function fn.
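A minimal sketch of flatMap (illustrative values, not from the original text):

import { Readable } from 'node:stream';

// Each chunk maps to an iterable whose items are flattened into the result.
const words = await Readable.from(['a b', 'c d'])
  .flatMap((line) => line.split(' '))
  .toArray();
console.log(words); // [ 'a', 'b', 'c', 'd' ]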
- ): Promise<void>;
This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise, that promise will be awaited.
This method is different from for await...of loops in that it can optionally process chunks concurrently. In addition, a forEach iteration can only be stopped by having passed a signal option and aborting the related AbortController, while for await...of can be stopped with break or return. In either case the stream will be destroyed.
This method is different from listening to the 'data' event in that it uses the readable event in the underlying machinery and can limit the number of concurrent fn calls.
@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise for when the stream has finished.
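A minimal sketch of forEach with bounded concurrency, using the concurrency option of the stream helpers (illustrative values, not from the original text):

import { Readable } from 'node:stream';

// Process up to two chunks at a time; the promise resolves when done.
await Readable.from([1, 2, 3, 4]).forEach(async (n) => {
  console.log(n);
}, { concurrency: 2 });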
Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to EventEmitter.defaultMaxListeners.
The readable.isPaused() method returns the current operating state of the Readable. This is used primarily by the mechanism that underlies the readable.pipe() method. In most typical cases, there will be no reason to use this method directly.

const readable = new stream.Readable();

readable.isPaused(); // === false
readable.pause();
readable.isPaused(); // === true
readable.resume();
readable.isPaused(); // === false
- options?: { destroyOnReturn: boolean }): AsyncIterator<any>;
The iterator created by this method gives users the option to cancel the destruction of the stream if the for await...of loop is exited by return, break, or throw, or if the iterator should destroy the stream if the stream emitted an error during iteration.
- eventName: string | symbol, listener?: Function): number;
Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.
@param eventName The name of the event being listened for
@param listener The event handler function
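A minimal sketch (the optional listener argument is the one described above; illustrative values, not from the original text):

import { EventEmitter } from 'node:events';

const ee = new EventEmitter();
const handler = () => {};
ee.on('ping', handler);
ee.on('ping', handler);

console.log(ee.listenerCount('ping')); // 2
console.log(ee.listenerCount('ping', handler)); // 2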
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named eventName.

server.on('connection', (stream) => {
  console.log('someone connected!');
});
console.log(util.inspect(server.listeners('connection')));
// Prints: [ [Function] ]
- map(
This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise, that promise will be awaited before being passed to the result stream.
@param fn a function to map over every chunk in the stream. Async or not.
@returns a stream mapped with the function fn.
- eventName: string | symbol, listener: (...args: any[]) => void): this;
Alias for emitter.removeListener().
- on(event: 'close', listener: () => void): this;
Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

server.on('connection', (stream) => {
  console.log('someone connected!');
});

Returns a reference to the EventEmitter, so that calls can be chained.
By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

import { EventEmitter } from 'node:events';
const myEE = new EventEmitter();
myEE.on('foo', () => console.log('a'));
myEE.prependListener('foo', () => console.log('b'));
myEE.emit('foo');
// Prints:
//   b
//   a

@param listener The callback function
- once(event: 'close', listener: () => void): this;
Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

server.once('connection', (stream) => {
  console.log('Ah, we have our first user!');
});

Returns a reference to the EventEmitter, so that calls can be chained.
By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

import { EventEmitter } from 'node:events';
const myEE = new EventEmitter();
myEE.once('foo', () => console.log('a'));
myEE.prependOnceListener('foo', () => console.log('b'));
myEE.emit('foo');
// Prints:
//   b
//   a

@param listener The callback function
The readable.pause() method will cause a stream in flowing mode to stop emitting 'data' events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.

const readable = getReadableStreamSomehow();
readable.on('data', (chunk) => {
  console.log(`Received ${chunk.length} bytes of data.`);
  readable.pause();
  console.log('There will be no additional data for 1 second.');
  setTimeout(() => {
    console.log('Now data will start flowing again.');
    readable.resume();
  }, 1000);
});

The readable.pause() method has no effect if there is a 'readable' event listener.
- event: 'close', listener: () => void): this;
Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

server.prependListener('connection', (stream) => {
  console.log('someone connected!');
});

Returns a reference to the EventEmitter, so that calls can be chained.
@param listener The callback function
- event: 'close', listener: () => void): this;
Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

server.prependOnceListener('connection', (stream) => {
  console.log('Ah, we have our first user!');
});

Returns a reference to the EventEmitter, so that calls can be chained.
@param listener The callback function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named eventName, including any wrappers (such as those created by .once()).

import { EventEmitter } from 'node:events';
const emitter = new EventEmitter();
emitter.once('log', () => console.log('log once'));

// Returns a new Array with a function `onceWrapper` which has a property
// `listener` which contains the original listener bound above
const listeners = emitter.rawListeners('log');
const logFnWrapper = listeners[0];

// Logs "log once" to the console and does not unbind the `once` event
logFnWrapper.listener();

// Logs "log once" to the console and removes the listener
logFnWrapper();

emitter.on('log', () => console.log('log persistently'));
// Will return a new Array with a single function bound by `.on()` above
const newListeners = emitter.rawListeners('log');

// Logs "log persistently" twice
newListeners[0]();
emitter.emit('log');
- read(size?: number): any;
The readable.read() method reads data out of the internal buffer and returns it. If no data is available to be read, null is returned. By default, the data is returned as a Buffer object unless an encoding has been specified using the readable.setEncoding() method or the stream is operating in object mode.
The optional size argument specifies a specific number of bytes to read. If size bytes are not available to be read, null will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.
If the size argument is not specified, all of the data contained in the internal buffer will be returned.
The size argument must be less than or equal to 1 GiB.
The readable.read() method should only be called on Readable streams operating in paused mode. In flowing mode, readable.read() is called automatically until the internal buffer is fully drained.

const readable = getReadableStreamSomehow();

// 'readable' may be triggered multiple times as data is buffered in
readable.on('readable', () => {
  let chunk;
  console.log('Stream is readable (new data received in buffer)');
  // Use a loop to make sure we read all currently available data
  while (null !== (chunk = readable.read())) {
    console.log(`Read ${chunk.length} bytes of data...`);
  }
});

// 'end' will be triggered once when there is no more data available
readable.on('end', () => {
  console.log('Reached end of stream.');
});

Each call to readable.read() returns a chunk of data, or null. The chunks are not concatenated. A while loop is necessary to consume all data currently in the buffer. When reading a large file, .read() may return null, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new 'readable' event will be emitted when there is more data in the buffer. Finally the 'end' event will be emitted when there is no more data to come.
Therefore to read a file's whole contents from a readable, it is necessary to collect chunks across multiple 'readable' events:

const chunks = [];

readable.on('readable', () => {
  let chunk;
  while (null !== (chunk = readable.read())) {
    chunks.push(chunk);
  }
});

readable.on('end', () => {
  const content = chunks.join('');
});

A Readable stream in object mode will always return a single item from a call to readable.read(size), regardless of the value of the size argument.
If the readable.read() method returns a chunk of data, a 'data' event will also be emitted.
Calling read after the 'end' event has been emitted will return null. No runtime error will be raised.
@param size Optional argument to specify how much data to read.
- initial?: undefined,): Promise<T>;
initial: T,): Promise<T>;
The overloads share this description. This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied, the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.
The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the readable.map method.
@param fn a reducer function to call over every chunk in the stream. Async or not.
@param initial the initial value to use in the reduction.
@returns a promise for the final value of the reduction.
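A minimal sketch of reduce (illustrative values, not from the original text):

import { Readable } from 'node:stream';

// Sum the chunks, starting from an explicit initial value.
const total = await Readable.from([1, 2, 3]).reduce((acc, n) => acc + n, 0);
console.log(total); // 6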
- eventName?: string | symbol): this;
Removes all listeners, or those of the specified eventName.
It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).
Returns a reference to the EventEmitter, so that calls can be chained.
- event: 'close', listener: () => void): this;
Removes the specified listener from the listener array for the event named eventName.

const callback = (stream) => {
  console.log('someone connected!');
};
server.on('connection', callback);
// ...
server.removeListener('connection', callback);

removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance.
Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from emit() in progress. Subsequent events behave as expected.

import { EventEmitter } from 'node:events';
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();

const callbackA = () => {
  console.log('A');
  myEmitter.removeListener('event', callbackB);
};

const callbackB = () => {
  console.log('B');
};

myEmitter.on('event', callbackA);
myEmitter.on('event', callbackB);

// callbackA removes listener callbackB but it will still be called.
// Internal listener array at time of emit [callbackA, callbackB]
myEmitter.emit('event');
// Prints:
//   A
//   B

// callbackB is now removed.
// Internal listener array [callbackA]
myEmitter.emit('event');
// Prints:
//   A

Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.
When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed:

import { EventEmitter } from 'node:events';
const ee = new EventEmitter();

function pong() {
  console.log('pong');
}

ee.on('ping', pong);
ee.once('ping', pong);
ee.removeListener('ping', pong);

ee.emit('ping');
ee.emit('ping');

Returns a reference to the EventEmitter, so that calls can be chained.
The readable.resume() method causes an explicitly paused Readable stream to resume emitting 'data' events, switching the stream into flowing mode.
The readable.resume() method can be used to fully consume the data from a stream without actually processing any of that data:

getReadableStreamSomehow()
  .resume()
  .on('end', () => {
    console.log('Reached the end, but did not read anything.');
  });

The readable.resume() method has no effect if there is a 'readable' event listener.
- autoPadding?: boolean): this;
When using block encryption algorithms, the Cipher class will automatically add padding to the input data to the appropriate block size. To disable the default padding call cipher.setAutoPadding(false).
When autoPadding is false, the length of the entire input data must be a multiple of the cipher's block size or cipher.final() will throw an error. Disabling automatic padding is useful for non-standard padding, for instance using 0x0 instead of PKCS padding.
The cipher.setAutoPadding() method must be called before cipher.final().
@returns for method chaining.
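A minimal sketch with padding disabled (the key and IV are generated inline purely for illustration; derive real keys with a KDF such as scrypt):

const { createCipheriv, randomBytes } = await import('node:crypto');

const key = randomBytes(24); // illustrative key
const iv = randomBytes(16);
const cipher = createCipheriv('aes-192-cbc', key, iv);
cipher.setAutoPadding(false);

// With padding disabled, input must be a multiple of the 16-byte block size.
const data = Buffer.alloc(32, 'a');
const encrypted = Buffer.concat([cipher.update(data), cipher.final()]);
console.log(encrypted.length); // 32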
- encoding: BufferEncoding): this;
The writable.setDefaultEncoding() method sets the default encoding for a Writable stream.
@param encoding The new default encoding
- encoding: BufferEncoding): this;
The readable.setEncoding() method sets the character encoding for data read from the Readable stream.
By default, no encoding is assigned and stream data will be returned as Buffer objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than as Buffer objects. For instance, calling readable.setEncoding('utf8') will cause the output data to be interpreted as UTF-8 data, and passed as strings. Calling readable.setEncoding('hex') will cause the data to be encoded in hexadecimal string format.
The Readable stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream as Buffer objects.

const readable = getReadableStreamSomehow();
readable.setEncoding('utf8');
readable.on('data', (chunk) => {
  assert.equal(typeof chunk, 'string');
  console.log('Got %d characters of string data:', chunk.length);
});

@param encoding The encoding to use.
- n: number): this;
By default EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps finding memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.
Returns a reference to the EventEmitter, so that calls can be chained.
- some(): Promise<boolean>;
This method is similar to Array.prototype.some and calls fn on each chunk in the stream until the awaited return value is true (or any truthy value). Once an fn call on a chunk's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false.
@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to true if fn returned a truthy value for at least one of the chunks.
- @param limit the number of chunks to take from the readable.
@returns a stream with limit chunks taken.
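A minimal sketch of some and take (illustrative values, not from the original text):

import { Readable } from 'node:stream';

// some short-circuits as soon as a chunk matches.
const hasBig = await Readable.from([1, 2, 3]).some((n) => n > 2);
console.log(hasBig); // true

// take limits the stream to its first `limit` chunks.
const firstTwo = await Readable.from([1, 2, 3, 4]).take(2).toArray();
console.log(firstTwo); // [ 1, 2 ]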
- ): Promise<any[]>;
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.
@returns a promise containing an array with the contents of the stream.
The writable.uncork() method flushes all data buffered since cork was called.
When using writable.cork() and writable.uncork() to manage the buffering of writes to a stream, defer calls to writable.uncork() using process.nextTick(). Doing so allows batching of all writable.write() calls that occur within a given Node.js event loop phase.

stream.cork();
stream.write('some ');
stream.write('data ');
process.nextTick(() => stream.uncork());

If the writable.cork() method is called multiple times on a stream, the same number of calls to writable.uncork() must be called to flush the buffered data.

stream.cork();
stream.write('some ');
stream.cork();
stream.write('data ');
process.nextTick(() => {
  stream.uncork();
  // The data will not be flushed until uncork() is called a second time.
  stream.uncork();
});

See also: writable.cork().
- destination?: WritableStream): this;
The readable.unpipe() method detaches a Writable stream previously attached using the pipe method.
If the destination is not specified, then all pipes are detached.
If the destination is specified, but no pipe is set up for it, then the method does nothing.

import fs from 'node:fs';
const readable = getReadableStreamSomehow();
const writable = fs.createWriteStream('file.txt');
// All the data from readable goes into 'file.txt',
// but only for the first second.
readable.pipe(writable);
setTimeout(() => {
  console.log('Stop writing to file.txt.');
  readable.unpipe(writable);
  console.log('Manually close the file stream.');
  writable.end();
}, 1000);

@param destination Optional specific stream to unpipe
- chunk: any, encoding?: BufferEncoding): void;
Passing chunk as null signals the end of the stream (EOF) and behaves the same as readable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.
The readable.unshift() method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.
The stream.unshift(chunk) method cannot be called after the 'end' event has been emitted or a runtime error will be thrown.
Developers using stream.unshift() often should consider switching to use of a Transform stream instead. See the API for stream implementers section for more information.

// Pull off a header delimited by \n\n.
// Use unshift() if we get too much.
// Call the callback with (error, header, stream).
import { StringDecoder } from 'node:string_decoder';
function parseHeader(stream, callback) {
  stream.on('error', callback);
  stream.on('readable', onReadable);
  const decoder = new StringDecoder('utf8');
  let header = '';
  function onReadable() {
    let chunk;
    while (null !== (chunk = stream.read())) {
      const str = decoder.write(chunk);
      if (str.includes('\n\n')) {
        // Found the header boundary.
        const split = str.split(/\n\n/);
        header += split.shift();
        const remaining = split.join('\n\n');
        const buf = Buffer.from(remaining, 'utf8');
        stream.removeListener('error', callback);
        // Remove the 'readable' listener before unshifting.
        stream.removeListener('readable', onReadable);
        if (buf.length)
          stream.unshift(buf);
        // Now the body of the message can be read from the stream.
        callback(null, header, stream);
        return;
      }
      // Still reading the header.
      header += str;
    }
  }
}

Unlike push, stream.unshift(chunk) will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results if readable.unshift() is called during a read (i.e. from within a _read implementation on a custom stream). Following the call to readable.unshift() with an immediate push will reset the reading state appropriately, however it is best to simply avoid calling readable.unshift() while in the process of performing a read.
@param chunk Chunk of data to unshift onto the read queue. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray}, {DataView} or null. For object mode streams, chunk may be any JavaScript value.
@param encoding Encoding of string chunks. Must be a valid Buffer encoding, such as 'utf8' or 'ascii'.
Updates the cipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer, TypedArray, or DataView. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.
The outputEncoding specifies the output format of the enciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.
The cipher.update() method can be called multiple times with new data until cipher.final() is called. Calling cipher.update() after cipher.final() will result in an error being thrown.
The remaining overloads share this description:
data: string,
@param inputEncoding The encoding of the data.
data: ArrayBufferView, inputEncoding: undefined,): string;
@param inputEncoding The encoding of the data.
@param outputEncoding The encoding of the return value.
data: string,): string;
@param inputEncoding The encoding of the data.
@param outputEncoding The encoding of the return value.
- wrap(stream: ReadableStream): this;
Prior to Node.js 0.10, streams did not implement the entire node:stream module API as it is currently defined. (See Compatibility for more information.)
When using an older Node.js library that emits 'data' events and has a pause method that is advisory only, the readable.wrap() method can be used to create a Readable stream that uses the old stream as its data source.
It will rarely be necessary to use readable.wrap() but the method has been provided as a convenience for interacting with older Node.js applications and libraries.

import { OldReader } from './old-api-module.js';
import { Readable } from 'node:stream';
const oreader = new OldReader();
const myReader = new Readable().wrap(oreader);

myReader.on('readable', () => {
  myReader.read(); // etc.
});

@param stream An "old style" readable stream
- chunk: any,): boolean;
chunk: any, encoding: BufferEncoding,): boolean;
The overloads share this description. The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.
The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.
While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.
Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.
If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

function write(data, cb) {
  if (!stream.write(data)) {
    stream.once('drain', cb);
  } else {
    process.nextTick(cb);
  }
}

// Wait for cb to be called before doing any other write.
write('hello', () => {
  console.log('Write completed, do more writes now.');
});

A Writable stream in object mode will always ignore the encoding argument.
@param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.
@param encoding The encoding, if chunk is a string.
@param callback Callback for when this chunk of data is flushed.
@returns false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.
- ): Disposable;
Listens once to the abort event on the provided signal.
Listening to the abort event on abort signals is unsafe and may lead to resource leaks since another third party with the signal can call e.stopImmediatePropagation(). Unfortunately Node.js cannot change this since it would violate the web standard. Additionally, the original API makes it easy to forget to remove listeners.
This API allows safely using AbortSignals in Node.js APIs by solving these two issues by listening to the event such that stopImmediatePropagation does not prevent the listener from running.
Returns a disposable so that it may be unsubscribed from more easily.

import { addAbortListener } from 'node:events';

function example(signal) {
  let disposable;
  try {
    signal.addEventListener('abort', (e) => e.stopImmediatePropagation());
    disposable = addAbortListener(signal, (e) => {
      // Do something when signal is aborted.
    });
  } finally {
    disposable?.[Symbol.dispose]();
  }
}

@returns Disposable that removes the abort listener.
- src: string | Object | Stream | ArrayBuffer | Blob | Iterable<any, any, any> | AsyncIterable<any, any, any> | AsyncGeneratorFunction | Promise<any>
A utility method for creating duplex streams.
- Stream converts a writable stream into a writable Duplex and a readable stream into a Duplex.
- Blob converts into a readable Duplex.
- string converts into a readable Duplex.
- ArrayBuffer converts into a readable Duplex.
- AsyncIterable converts into a readable Duplex. Cannot yield null.
- AsyncGeneratorFunction converts into a readable/writable transform Duplex. Must take a source AsyncIterable as first parameter. Cannot yield null.
- AsyncFunction converts into a writable Duplex. Must return either null or undefined.
- Object ({ writable, readable }) converts readable and writable into Stream and then combines them into Duplex where the Duplex will write to the writable and read from the readable.
- Promise converts into a readable Duplex. Value null is ignored.
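A minimal sketch of the AsyncGeneratorFunction form (illustrative, not from the original text):

import { Duplex } from 'node:stream';

// A transform Duplex: reads written chunks from `source`, yields them uppercased.
const upper = Duplex.from(async function* (source) {
  for await (const chunk of source) {
    yield String(chunk).toUpperCase();
  }
});

upper.end('hello');
upper.on('data', (chunk) => console.log(chunk.toString())); // HELLO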
- options?: Pick<DuplexOptions<Duplex>, 'signal' | 'allowHalfOpen' | 'decodeStrings' | 'encoding' | 'highWaterMark' | 'objectMode'>
A utility method for creating a Duplex from a web ReadableStream and WritableStream.
- name: string | symbol): Function[];
Returns a copy of the array of listeners for the event named eventName.
For EventEmitters this behaves exactly the same as calling .listeners on the emitter.
For EventTargets this is the only way to get the event listeners for the event target. This is useful for debugging and diagnostic purposes.

import { getEventListeners, EventEmitter } from 'node:events';

{
  const ee = new EventEmitter();
  const listener = () => console.log('Events are fun');
  ee.on('foo', listener);
  console.log(getEventListeners(ee, 'foo')); // [ [Function: listener] ]
}
{
  const et = new EventTarget();
  const listener = () => console.log('Events are fun');
  et.addEventListener('foo', listener);
  console.log(getEventListeners(et, 'foo')); // [ [Function: listener] ]
}
- ): number;
Returns the currently set max amount of listeners.
For EventEmitters this behaves exactly the same as calling .getMaxListeners on the emitter.
For EventTargets this is the only way to get the max event listeners for the event target. If the number of event handlers on a single EventTarget exceeds the max set, the EventTarget will print a warning.

import { getMaxListeners, setMaxListeners, EventEmitter } from 'node:events';

{
  const ee = new EventEmitter();
  console.log(getMaxListeners(ee)); // 10
  setMaxListeners(11, ee);
  console.log(getMaxListeners(ee)); // 11
}
{
  const et = new EventTarget();
  console.log(getMaxListeners(et)); // 10
  setMaxListeners(11, et);
  console.log(getMaxListeners(et)); // 11
}
- emitter: EventEmitter, eventName: string | symbol, options?: StaticEventEmitterIteratorOptions): AsyncIterator<any[]>;
eventName: string, options?: StaticEventEmitterIteratorOptions): AsyncIterator<any[]>;
The overloads share this description.

import { on, EventEmitter } from 'node:events';
import process from 'node:process';

const ee = new EventEmitter();

// Emit later on
process.nextTick(() => {
  ee.emit('foo', 'bar');
  ee.emit('foo', 42);
});

for await (const event of on(ee, 'foo')) {
  // The execution of this inner block is synchronous and it
  // processes one event at a time (even with await). Do not use
  // if concurrent execution is required.
  console.log(event); // prints ['bar'] [42]
}
// Unreachable here

Returns an AsyncIterator that iterates eventName events. It will throw if the EventEmitter emits 'error'. It removes all listeners when exiting the loop. The value returned by each iteration is an array composed of the emitted event arguments.
An AbortSignal can be used to cancel waiting on events:

import { on, EventEmitter } from 'node:events';
import process from 'node:process';

const ac = new AbortController();

(async () => {
  const ee = new EventEmitter();

  // Emit later on
  process.nextTick(() => {
    ee.emit('foo', 'bar');
    ee.emit('foo', 42);
  });

  for await (const event of on(ee, 'foo', { signal: ac.signal })) {
    // The execution of this inner block is synchronous and it
    // processes one event at a time (even with await). Do not use
    // if concurrent execution is required.
    console.log(event); // prints ['bar'] [42]
  }
  // Unreachable here
})();

process.nextTick(() => ac.abort());

Use the close option to specify an array of event names that will end the iteration:

import { on, EventEmitter } from 'node:events';
import process from 'node:process';

const ee = new EventEmitter();

// Emit later on
process.nextTick(() => {
  ee.emit('foo', 'bar');
  ee.emit('foo', 42);
  ee.emit('close');
});

for await (const event of on(ee, 'foo', { close: ['close'] })) {
  console.log(event); // prints ['bar'] [42]
}
// the loop will exit after 'close' is emitted
console.log('done'); // prints 'done'

@returns An AsyncIterator that iterates eventName events emitted by the emitter
- emitter: EventEmitter,eventName: string | symbol,options?: StaticEventEmitterOptions): Promise<any[]>;
Creates a
Promise
that is fulfilled when theEventEmitter
emits the given event or that is rejected if theEventEmitter
emits'error'
while waiting. ThePromise
will resolve with an array of all the arguments emitted to the given event.This method is intentionally generic and works with the web platform EventTarget interface, which has no special
'error'
event semantics and does not listen to the'error'
event.import { once, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); process.nextTick(() => { ee.emit('myevent', 42); }); const [value] = await once(ee, 'myevent'); console.log(value); const err = new Error('kaboom'); process.nextTick(() => { ee.emit('error', err); }); try { await once(ee, 'myevent'); } catch (err) { console.error('error happened', err); }
The special handling of the
'error'
event is only used whenevents.once()
is used to wait for another event. Ifevents.once()
is used to wait for the 'error'
event itself, then it is treated as any other kind of event without special handling:import { EventEmitter, once } from 'node:events'; const ee = new EventEmitter(); once(ee, 'error') .then(([err]) => console.log('ok', err.message)) .catch((err) => console.error('error', err.message)); ee.emit('error', new Error('boom')); // Prints: ok boom
An
AbortSignal
can be used to cancel waiting for the event:import { EventEmitter, once } from 'node:events'; const ee = new EventEmitter(); const ac = new AbortController(); async function foo(emitter, event, signal) { try { await once(emitter, event, { signal }); console.log('event emitted!'); } catch (error) { if (error.name === 'AbortError') { console.error('Waiting for the event was canceled!'); } else { console.error('There was an error', error.message); } } } foo(ee, 'foo', ac.signal); ac.abort(); // Abort waiting for the event ee.emit('foo'); // Prints: Waiting for the event was canceled!
eventName: string,options?: StaticEventEmitterOptions): Promise<any[]>;Creates a
Promise
that is fulfilled when theEventEmitter
emits the given event or that is rejected if theEventEmitter
emits'error'
while waiting. ThePromise
will resolve with an array of all the arguments emitted to the given event.This method is intentionally generic and works with the web platform EventTarget interface, which has no special
'error'
event semantics and does not listen to the'error'
event.import { once, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); process.nextTick(() => { ee.emit('myevent', 42); }); const [value] = await once(ee, 'myevent'); console.log(value); const err = new Error('kaboom'); process.nextTick(() => { ee.emit('error', err); }); try { await once(ee, 'myevent'); } catch (err) { console.error('error happened', err); }
The special handling of the
'error'
event is only used whenevents.once()
is used to wait for another event. Ifevents.once()
is used to wait for the 'error'
event itself, then it is treated as any other kind of event without special handling:import { EventEmitter, once } from 'node:events'; const ee = new EventEmitter(); once(ee, 'error') .then(([err]) => console.log('ok', err.message)) .catch((err) => console.error('error', err.message)); ee.emit('error', new Error('boom')); // Prints: ok boom
An
AbortSignal
can be used to cancel waiting for the event:import { EventEmitter, once } from 'node:events'; const ee = new EventEmitter(); const ac = new AbortController(); async function foo(emitter, event, signal) { try { await once(emitter, event, { signal }); console.log('event emitted!'); } catch (error) { if (error.name === 'AbortError') { console.error('Waiting for the event was canceled!'); } else { console.error('There was an error', error.message); } } } foo(ee, 'foo', ac.signal); ac.abort(); // Abort waiting for the event ee.emit('foo'); // Prints: Waiting for the event was canceled!
- n?: number,): void;
import { setMaxListeners, EventEmitter } from 'node:events'; const target = new EventTarget(); const emitter = new EventEmitter(); setMaxListeners(5, target, emitter);
@param nA non-negative number. The maximum number of listeners per
EventTarget
event.@param eventTargetsZero or more {EventTarget} or {EventEmitter} instances. If none are specified,
n
is set as the default max for all newly created {EventTarget} and {EventEmitter} objects. A utility method for creating a web
ReadableStream
andWritableStream
from aDuplex
.
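A minimal sketch of the conversion (Duplex.toWeb is assumed as the method name here; it is marked experimental in Node.js, and the top-level await assumes an ES module):

import { Buffer } from 'node:buffer';
import { Duplex, PassThrough } from 'node:stream';

const duplex = new PassThrough(); // any Duplex will do
const { readable, writable } = Duplex.toWeb(duplex);

// Write through the web WritableStream...
const writer = writable.getWriter();
await writer.write('hello');

// ...and read it back from the web ReadableStream.
const reader = readable.getReader();
const { value } = await reader.read();
console.log(Buffer.from(value).toString('utf8')); // Prints: hello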
class Decipher
Instances of the
Decipher
class are used to decrypt data. The class can be used in one of two ways:- As a
stream
that is both readable and writable, where plain encrypted data is written to produce unencrypted data on the readable side, or - Using the
decipher.update()
anddecipher.final()
methods to produce the unencrypted data.
The createDecipheriv method is used to create
Decipher
instances.Decipher
objects are not to be created directly using thenew
keyword.Example: Using
Decipher
objects as streams:import { Buffer } from 'node:buffer'; const { scryptSync, createDecipheriv, } = await import('node:crypto'); const algorithm = 'aes-192-cbc'; const password = 'Password used to generate key'; // Key length is dependent on the algorithm. In this case for aes192, it is // 24 bytes (192 bits). // Use the async `crypto.scrypt()` instead. const key = scryptSync(password, 'salt', 24); // The IV is usually passed along with the ciphertext. const iv = Buffer.alloc(16, 0); // Initialization vector. const decipher = createDecipheriv(algorithm, key, iv); let decrypted = ''; decipher.on('readable', () => { let chunk; while (null !== (chunk = decipher.read())) { decrypted += chunk.toString('utf8'); } }); decipher.on('end', () => { console.log(decrypted); // Prints: some clear text data }); // Encrypted with same algorithm, key and iv. const encrypted = 'e5f79c5915c02171eec6b212d5520d44480993d7d622a7c4c2da32f6efda0ffa'; decipher.write(encrypted, 'hex'); decipher.end();
Example: Using
Decipher
and piped streams:import { createReadStream, createWriteStream, } from 'node:fs'; import { Buffer } from 'node:buffer'; const { scryptSync, createDecipheriv, } = await import('node:crypto'); const algorithm = 'aes-192-cbc'; const password = 'Password used to generate key'; // Use the async `crypto.scrypt()` instead. const key = scryptSync(password, 'salt', 24); // The IV is usually passed along with the ciphertext. const iv = Buffer.alloc(16, 0); // Initialization vector. const decipher = createDecipheriv(algorithm, key, iv); const input = createReadStream('test.enc'); const output = createWriteStream('test.js'); input.pipe(decipher).pipe(output);
Example: Using the
decipher.update()
anddecipher.final()
methods:import { Buffer } from 'node:buffer'; const { scryptSync, createDecipheriv, } = await import('node:crypto'); const algorithm = 'aes-192-cbc'; const password = 'Password used to generate key'; // Use the async `crypto.scrypt()` instead. const key = scryptSync(password, 'salt', 24); // The IV is usually passed along with the ciphertext. const iv = Buffer.alloc(16, 0); // Initialization vector. const decipher = createDecipheriv(algorithm, key, iv); // Encrypted using same algorithm, key and iv. const encrypted = 'e5f79c5915c02171eec6b212d5520d44480993d7d622a7c4c2da32f6efda0ffa'; let decrypted = decipher.update(encrypted, 'hex', 'utf8'); decrypted += decipher.final('utf8'); console.log(decrypted); // Prints: some clear text data
- allowHalfOpen: boolean
If
false
then the stream will automatically end the writable side when the readable side ends. Set initially by theallowHalfOpen
constructor option, which defaults totrue
.This can be changed manually to change the half-open behavior of an existing
Duplex
stream instance, but must be changed before the'end'
event is emitted. - readable: boolean
Is
true
if it is safe to call read, which means the stream has not been destroyed or emitted'error'
or'end'
. - readonly readableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'end'
. - readonly readableEncoding: null | BufferEncoding
Getter for the property
encoding
of a givenReadable
stream. Theencoding
property can be set using the setEncoding method. - readonly readableFlowing: null | boolean
This property reflects the current state of a
Readable
stream as described in the Three states section. - readonly readableHighWaterMark: number
Returns the value of
highWaterMark
passed when creating thisReadable
. - readonly readableLength: number
This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the
highWaterMark
. - readonly writable: boolean
Is
true
if it is safe to callwritable.write()
, which means the stream has not been destroyed, errored, or ended. - readonly writableCorked: number
Number of times
writable.uncork()
needs to be called in order to fully uncork the stream. - readonly writableEnded: boolean
Is
true
afterwritable.end()
has been called. This property does not indicate whether the data has been flushed, for this usewritable.writableFinished
instead. - readonly writableHighWaterMark: number
Return the value of
highWaterMark
passed when creating thisWritable
. - readonly writableLength: number
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the
highWaterMark
. - readonly writableNeedDrain: boolean
Is
true
if the stream's buffer has been full and stream will emit'drain'
. - static captureRejections: boolean
Value: boolean
Change the default
captureRejections
option on all newEventEmitter
objects. - readonly static captureRejectionSymbol: typeof captureRejectionSymbol
Value:
Symbol.for('nodejs.rejection')
See how to write a custom
rejection handler
. - static defaultMaxListeners: number
By default, a maximum of
10
listeners can be registered for any single event. This limit can be changed for individualEventEmitter
instances using theemitter.setMaxListeners(n)
method. To change the default for allEventEmitter
instances, theevents.defaultMaxListeners
property can be used. If this value is not a positive number, aRangeError
is thrown.Take caution when setting the
events.defaultMaxListeners
because the change affects allEventEmitter
instances, including those created before the change is made. However, callingemitter.setMaxListeners(n)
still has precedence overevents.defaultMaxListeners
.This is not a hard limit. The
EventEmitter
instance will allow more listeners to be added but will output a trace warning to stderr indicating that a "possible EventEmitter memory leak" has been detected. For any singleEventEmitter
, theemitter.getMaxListeners()
andemitter.setMaxListeners()
methods can be used to temporarily avoid this warning:import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.setMaxListeners(emitter.getMaxListeners() + 1); emitter.once('event', () => { // do stuff emitter.setMaxListeners(Math.max(emitter.getMaxListeners() - 1, 0)); });
The
--trace-warnings
command-line flag can be used to display the stack trace for such warnings.The emitted warning can be inspected with
process.on('warning')
and will have the additionalemitter
,type
, andcount
properties, referring to the event emitter instance, the event's name and the number of attached listeners, respectively. Itsname
property is set to'MaxListenersExceededWarning'
. - readonly static errorMonitor: typeof errorMonitor
This symbol shall be used to install a listener for only monitoring
'error'
events. Listeners installed using this symbol are called before the regular'error'
listeners are called.Installing a listener using this symbol does not change the behavior once an
'error'
event is emitted. Therefore, the process will still crash if no regular'error'
listener is installed. Calls
readable.destroy()
with anAbortError
and returns a promise that fulfills when the stream is finished.- event: 'close',listener: () => void): this;
Event emitter. The defined events on this stream include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'data',listener: (chunk: any) => void): this;Event emitter. The defined events on this stream include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'drain',listener: () => void): this;Event emitter. The defined events on this stream include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'end',listener: () => void): this;Event emitter. The defined events on this stream include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'error',): this;Event emitter. The defined events on this stream include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'finish',listener: () => void): this;Event emitter. The defined events on this stream include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'pause',listener: () => void): this;Event emitter. The defined events on this stream include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'pipe',): this;Event emitter. The defined events on this stream include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'readable',listener: () => void): this;Event emitter. The defined events on this stream include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'resume',listener: () => void): this;Event emitter. The defined events on this stream include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'unpipe',): this;Event emitter. The defined events on this stream include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: string | symbol,listener: (...args: any[]) => void): this;Event emitter. The defined events on this stream include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
This method returns a new stream with chunks of the underlying stream paired with a counter in the form
[index, chunk]
. The first index value is0
and it increases by 1 for each chunk produced.@returnsa stream of indexed pairs.
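A minimal sketch, assuming this is the asIndexedPairs() stream helper (an experimental API; values are illustrative):

import { Readable } from 'node:stream';

const pairs = await Readable.from(['a', 'b', 'c'])
  .asIndexedPairs()
  .toArray();
console.log(pairs); // Prints: [ [ 0, 'a' ], [ 1, 'b' ], [ 2, 'c' ] ]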
- stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
The
writable.cork()
method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.The primary intent of
writable.cork()
is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination,writable.cork()
buffers all the chunks untilwritable.uncork()
is called, which will pass them all towritable._writev()
, if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use ofwritable.cork()
without implementingwritable._writev()
may have an adverse effect on throughput.See also:
writable.uncork()
,writable._writev()
.- ): this;
Destroy the stream. Optionally emit an
'error'
event, and emit a'close'
event (unlessemitClose
is set tofalse
). After this call, the readable stream will release any internal resources and subsequent calls topush()
will be ignored.Once
destroy()
has been called any further calls will be a no-op and no further errors except from_destroy()
may be emitted as'error'
.Implementors should not override this method, but instead implement
readable._destroy()
.@param errorError which will be passed as payload in
'error'
event - drop(limit: number,
This method returns a new stream with the first limit chunks dropped from the start.
@param limitthe number of chunks to drop from the readable.
@returnsa stream with limit chunks dropped from the start.
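A minimal sketch (values are illustrative):

import { Readable } from 'node:stream';

const remaining = await Readable.from([1, 2, 3, 4]).drop(2).toArray();
console.log(remaining); // Prints: [ 3, 4 ]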
- emit(event: 'close'): boolean;
Synchronously calls each of the listeners registered for the event named
eventName
, in the order they were registered, passing the supplied arguments to each.Returns
true
if the event had listeners,false
otherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener
- end(cb?: () => void): this;
Calling the
writable.end()
method signals that no more data will be written to theWritable
. The optionalchunk
andencoding
arguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!
end(chunk: any,cb?: () => void): this;Calling the
writable.end()
method signals that no more data will be written to theWritable
. The optionalchunk
andencoding
arguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!
@param chunkOptional data to write. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunk
may be any JavaScript value other thannull
.end(chunk: any,encoding: BufferEncoding,cb?: () => void): this;Calling the
writable.end()
method signals that no more data will be written to theWritable
. The optionalchunk
andencoding
arguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!
@param chunkOptional data to write. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunk
may be any JavaScript value other thannull
.@param encodingThe encoding if
chunk
is a string. Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbol
s.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]
- ): Promise<boolean>;
This method is similar to
Array.prototype.every
and calls fn on each chunk in the stream to check if all awaited return values are truthy for fn. Once the awaited return value of an fn call on a chunk is falsy, the stream is destroyed and the promise is fulfilled with false
. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled withtrue
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
true
if fn returned a truthy value for every one of the chunks. This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be
await
ed.@param fna function to filter chunks from the stream. Async or not.
@returnsa stream filtered with the predicate fn.
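A minimal sketch (the predicate and values are illustrative):

import { Readable } from 'node:stream';

const evens = await Readable.from([1, 2, 3, 4])
  .filter((x) => x % 2 === 0) // the predicate may also be async
  .toArray();
console.log(evens); // Prints: [ 2, 4 ]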
Once the
decipher.final()
method has been called, theDecipher
object can no longer be used to decrypt data. Attempts to calldecipher.final()
more than once will result in an error being thrown.@returnsAny remaining deciphered contents. If
outputEncoding
is specified, a string is returned. If anoutputEncoding
is not provided, a Buffer is returned.outputEncoding: BufferEncoding): string;Once the
decipher.final()
method has been called, theDecipher
object can no longer be used to decrypt data. Attempts to calldecipher.final()
more than once will result in an error being thrown.@param outputEncodingThe
encoding
of the return value.@returnsAny remaining deciphered contents. If
outputEncoding
is specified, a string is returned. If anoutputEncoding
is not provided, a Buffer is returned.- ): Promise<undefined | T>;
This method is similar to
Array.prototype.find
and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled withundefined
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefined
if no element was found.find(): Promise<any>;This method is similar to
Array.prototype.find
and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled withundefined
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefined
if no element was found. This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.
It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.
@param fna function to map over every chunk in the stream. May be async. May be a stream or generator.
@returnsa stream flat-mapped with the function fn.
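A minimal sketch (values are illustrative); each chunk expands into several chunks in the result:

import { Readable } from 'node:stream';

const flattened = await Readable.from([1, 2])
  .flatMap((x) => [x, -x]) // may also return a stream or async iterable
  .toArray();
console.log(flattened); // Prints: [ 1, -1, 2, -2 ]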
- ): Promise<void>;
This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be
await
ed.This method is different from
for await...of
loops in that it can optionally process chunks concurrently. In addition, aforEach
iteration can only be stopped by having passed asignal
option and aborting the related AbortController whilefor await...of
can be stopped withbreak
orreturn
. In either case the stream will be destroyed.This method is different from listening to the
'data'
event in that it uses thereadable
event in the underlying machinery and can limit the number of concurrent fn calls.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise for when the stream has finished.
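A minimal sketch (values are illustrative):

import { Readable } from 'node:stream';

let sum = 0;
await Readable.from([1, 2, 3]).forEach((x) => {
  sum += x; // called once per chunk; the function may also be async
});
console.log(sum); // Prints: 6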
Returns the current max listener value for the
EventEmitter
which is either set byemitter.setMaxListeners(n)
or defaults to EventEmitter.defaultMaxListeners.The
readable.isPaused()
method returns the current operating state of theReadable
. This is used primarily by the mechanism that underlies thereadable.pipe()
method. In most typical cases, there will be no reason to use this method directly.const readable = new stream.Readable(); readable.isPaused(); // === false readable.pause(); readable.isPaused(); // === true readable.resume(); readable.isPaused(); // === false
- options?: { destroyOnReturn: boolean }): AsyncIterator<any>;
The iterator created by this method gives users the option to cancel the destruction of the stream if the
for await...of
loop is exited byreturn
,break
, orthrow
, or if the iterator should destroy the stream if the stream emitted an error during iteration. - eventName: string | symbol,listener?: Function): number;
Returns the number of listeners listening for the event named
eventName
. Iflistener
is provided, it will return how many times the listener is found in the list of the listeners of the event.@param eventNameThe name of the event being listened for
@param listenerThe event handler function
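A minimal sketch (the event name is illustrative):

import { EventEmitter } from 'node:events';

const myEmitter = new EventEmitter();
myEmitter.on('event', () => {});
myEmitter.on('event', () => {});
console.log(myEmitter.listenerCount('event')); // Prints: 2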
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName
.server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ]
- map(
This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be
await
ed before being passed to the result stream.@param fna function to map over every chunk in the stream. Async or not.
@returnsa stream mapped with the function fn.
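A minimal sketch (values are illustrative):

import { Readable } from 'node:stream';

const doubled = await Readable.from([1, 2, 3])
  .map((x) => x * 2) // may also be async
  .toArray();
console.log(doubled); // Prints: [ 2, 4, 6 ]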
- eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for
emitter.removeListener()
. - on(event: 'close',listener: () => void): this;
Adds the
listener
function to the end of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
- once(event: 'close',listener: () => void): this;
Adds a one-time
listener
function for the event namedeventName
. The next timeeventName
is triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
The
readable.pause()
method will cause a stream in flowing mode to stop emitting'data'
events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.const readable = getReadableStreamSomehow(); readable.on('data', (chunk) => { console.log(`Received ${chunk.length} bytes of data.`); readable.pause(); console.log('There will be no additional data for 1 second.'); setTimeout(() => { console.log('Now data will start flowing again.'); readable.resume(); }, 1000); });
The
readable.pause()
method has no effect if there is a'readable'
event listener.- event: 'close',listener: () => void): this;
Adds the
listener
function to the beginning of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
- event: 'close',listener: () => void): this;
Adds a one-time
listener
function for the event namedeventName
to the beginning of the listeners array. The next timeeventName
is triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName
, including any wrappers (such as those created by.once()
).import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log');
- read(size?: number): any;
The
readable.read()
method reads data out of the internal buffer and returns it. If no data is available to be read,null
is returned. By default, the data is returned as aBuffer
object unless an encoding has been specified using thereadable.setEncoding()
method or the stream is operating in object mode.The optional
size
argument specifies a specific number of bytes to read. Ifsize
bytes are not available to be read,null
will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.If the
size
argument is not specified, all of the data contained in the internal buffer will be returned.The
size
argument must be less than or equal to 1 GiB.The
readable.read()
method should only be called onReadable
streams operating in paused mode. In flowing mode,readable.read()
is called automatically until the internal buffer is fully drained.const readable = getReadableStreamSomehow(); // 'readable' may be triggered multiple times as data is buffered in readable.on('readable', () => { let chunk; console.log('Stream is readable (new data received in buffer)'); // Use a loop to make sure we read all currently available data while (null !== (chunk = readable.read())) { console.log(`Read ${chunk.length} bytes of data...`); } }); // 'end' will be triggered once when there is no more data available readable.on('end', () => { console.log('Reached end of stream.'); });
Each call to
readable.read()
returns a chunk of data, ornull
. The chunks are not concatenated. Awhile
loop is necessary to consume all data currently in the buffer. When reading a large file, .read()
may returnnull
, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new'readable'
event will be emitted when there is more data in the buffer. Finally the'end'
event will be emitted when there is no more data to come.Therefore to read a file's whole contents from a
readable
, it is necessary to collect chunks across multiple'readable'
events:const chunks = []; readable.on('readable', () => { let chunk; while (null !== (chunk = readable.read())) { chunks.push(chunk); } }); readable.on('end', () => { const content = chunks.join(''); });
A
Readable
stream in object mode will always return a single item from a call toreadable.read(size)
, regardless of the value of thesize
argument.If the
readable.read()
method returns a chunk of data, a'data'
event will also be emitted.Calling read after the
'end'
event has been emitted will returnnull
. No runtime error will be raised.@param sizeOptional argument to specify how much data to read.
- initial?: undefined,): Promise<T>;
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeError
with theERR_INVALID_ARGS
code property.The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the
readable.map
method.@param fna reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
initial: T,): Promise<T>;This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeError
with theERR_INVALID_ARGS
code property.The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the
readable.map
method.@param fna reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
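A minimal sketch (values are illustrative):

import { Readable } from 'node:stream';

const total = await Readable.from([1, 2, 3, 4])
  .reduce((sum, chunk) => sum + chunk, 0);
console.log(total); // Prints: 10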
- eventName?: string | symbol): this;
Removes all listeners, or those of the specified
eventName
.It is bad practice to remove listeners added elsewhere in the code, particularly when the
EventEmitter
instance was created by some other component or module (e.g. sockets or file streams).Returns a reference to the
EventEmitter
, so that calls can be chained. - event: 'close',listener: () => void): this;
Removes the specified
listener
from the listener array for the event namedeventName
.const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback);
removeListener()
will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specifiedeventName
, thenremoveListener()
must be called multiple times to remove each instance.Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any
removeListener()
orremoveAllListeners()
calls after emitting and before the last listener finishes execution will not remove them fromemit()
in progress. Subsequent events behave as expected.import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // A
Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the
emitter.listeners()
method will need to be recreated.When a single function has been added as a handler multiple times for a single event (as in the example below),
removeListener()
will remove the most recently added instance. In the example theonce('ping')
listener is removed:import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping');
Returns a reference to the
EventEmitter
, so that calls can be chained. The
readable.resume()
method causes an explicitly pausedReadable
stream to resume emitting'data'
events, switching the stream into flowing mode.The
readable.resume()
method can be used to fully consume the data from a stream without actually processing any of that data:getReadableStreamSomehow() .resume() .on('end', () => { console.log('Reached the end, but did not read anything.'); });
The
readable.resume()
method has no effect if there is a'readable'
event listener.- auto_padding?: boolean): this;
When data has been encrypted without standard block padding, calling
decipher.setAutoPadding(false)
will disable automatic padding to preventdecipher.final()
from checking for and removing padding.Turning auto padding off will only work if the input data's length is a multiple of the ciphers block size.
The
decipher.setAutoPadding()
method must be called beforedecipher.final()
.@returnsfor method chaining.
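A minimal round-trip sketch, reusing the key/IV setup from the examples above; the plaintext is deliberately one full AES block (16 bytes) so that no padding is needed:

import { Buffer } from 'node:buffer';
import { createCipheriv, createDecipheriv, scryptSync } from 'node:crypto';

const key = scryptSync('Password used to generate key', 'salt', 24);
const iv = Buffer.alloc(16, 0);

// Encrypt a block-aligned message with padding disabled.
const cipher = createCipheriv('aes-192-cbc', key, iv).setAutoPadding(false);
const encrypted = Buffer.concat([
  cipher.update('exactly 16 bytes'),
  cipher.final(),
]);

// The decipher must also skip the padding check.
const decipher = createDecipheriv('aes-192-cbc', key, iv).setAutoPadding(false);
const decrypted = Buffer.concat([
  decipher.update(encrypted),
  decipher.final(),
]);
console.log(decrypted.toString('utf8')); // Prints: exactly 16 bytes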
- encoding: BufferEncoding): this;
The
writable.setDefaultEncoding()
method sets the defaultencoding
for aWritable
stream.@param encodingThe new default encoding
- encoding: BufferEncoding): this;
The
readable.setEncoding()
method sets the character encoding for data read from theReadable
stream.By default, no encoding is assigned and stream data will be returned as
Buffer
objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than asBuffer
objects. For instance, callingreadable.setEncoding('utf8')
will cause the output data to be interpreted as UTF-8 data, and passed as strings. Callingreadable.setEncoding('hex')
will cause the data to be encoded in hexadecimal string format.The
Readable
stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream asBuffer
objects.const readable = getReadableStreamSomehow(); readable.setEncoding('utf8'); readable.on('data', (chunk) => { assert.equal(typeof chunk, 'string'); console.log('Got %d characters of string data:', chunk.length); });
@param encodingThe encoding to use.
- n: number): this;
By default
EventEmitter
s will print a warning if more than10
listeners are added for a particular event. This is a useful default that helps finding memory leaks. Theemitter.setMaxListeners()
method allows the limit to be modified for this specificEventEmitter
instance. The value can be set toInfinity
(or0
) to indicate an unlimited number of listeners.Returns a reference to the
EventEmitter
, so that calls can be chained. - some(): Promise<boolean>;
This method is similar to
Array.prototype.some
and calls fn on each chunk in the stream until the awaited return value is true (or any truthy value). Once the awaited return value of an fn call on a chunk is truthy, the stream is destroyed and the promise is fulfilled with true
. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled withfalse
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
true
if fn returned a truthy value for at least one of the chunks. - @param limit
the number of chunks to take from the readable.
@returnsa stream with limit chunks taken.
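A minimal sketch (values are illustrative):

import { Readable } from 'node:stream';

const firstTwo = await Readable.from([1, 2, 3, 4]).take(2).toArray();
console.log(firstTwo); // Prints: [ 1, 2 ]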
- ): Promise<any[]>;
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.
@returnsa promise containing an array with the contents of the stream.
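A minimal sketch (values are illustrative):

import { Readable } from 'node:stream';

const chunks = await Readable.from([1, 2, 3]).toArray();
console.log(chunks); // Prints: [ 1, 2, 3 ]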
The
writable.uncork()
method flushes all data buffered since cork was called.When using
writable.cork()
andwritable.uncork()
to manage the buffering of writes to a stream, defer calls towritable.uncork()
usingprocess.nextTick()
. Doing so allows batching of allwritable.write()
calls that occur within a given Node.js event loop phase.stream.cork(); stream.write('some '); stream.write('data '); process.nextTick(() => stream.uncork());
If the
writable.cork()
method is called multiple times on a stream, the same number of calls towritable.uncork()
must be called to flush the buffered data.stream.cork(); stream.write('some '); stream.cork(); stream.write('data '); process.nextTick(() => { stream.uncork(); // The data will not be flushed until uncork() is called a second time. stream.uncork(); });
See also:
writable.cork()
.- destination?: WritableStream): this;
The
readable.unpipe()
method detaches aWritable
stream previously attached using the pipe method.If the
destination
is not specified, then all pipes are detached.If the
destination
is specified, but no pipe is set up for it, then the method does nothing.import fs from 'node:fs'; const readable = getReadableStreamSomehow(); const writable = fs.createWriteStream('file.txt'); // All the data from readable goes into 'file.txt', // but only for the first second. readable.pipe(writable); setTimeout(() => { console.log('Stop writing to file.txt.'); readable.unpipe(writable); console.log('Manually close the file stream.'); writable.end(); }, 1000);
@param destinationOptional specific stream to unpipe
- chunk: any,encoding?: BufferEncoding): void;
Passing
chunk
asnull
signals the end of the stream (EOF) and behaves the same asreadable.push(null)
, after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.The
readable.unshift()
method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.The
stream.unshift(chunk)
method cannot be called after the'end'
event has been emitted or a runtime error will be thrown.Developers using
stream.unshift()
often should consider switching to use of aTransform
stream instead. See theAPI for stream implementers
section for more information.// Pull off a header delimited by \n\n. // Use unshift() if we get too much. // Call the callback with (error, header, stream). import { StringDecoder } from 'node:string_decoder'; function parseHeader(stream, callback) { stream.on('error', callback); stream.on('readable', onReadable); const decoder = new StringDecoder('utf8'); let header = ''; function onReadable() { let chunk; while (null !== (chunk = stream.read())) { const str = decoder.write(chunk); if (str.includes('\n\n')) { // Found the header boundary. const split = str.split(/\n\n/); header += split.shift(); const remaining = split.join('\n\n'); const buf = Buffer.from(remaining, 'utf8'); stream.removeListener('error', callback); // Remove the 'readable' listener before unshifting. stream.removeListener('readable', onReadable); if (buf.length) stream.unshift(buf); // Now the body of the message can be read from the stream. callback(null, header, stream); return; } // Still reading the header. header += str; } } }
Unlike push,
stream.unshift(chunk)
will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results ifreadable.unshift()
is called during a read (i.e. from within a _read implementation on a custom stream). Following the call toreadable.unshift()
with an immediate push will reset the reading state appropriately, however it is best to simply avoid callingreadable.unshift()
while in the process of performing a read.@param chunkChunk of data to unshift onto the read queue. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray}, {DataView} ornull
. For object mode streams,chunk
may be any JavaScript value.@param encodingEncoding of string chunks. Must be a valid
Buffer
encoding, such as'utf8'
or'ascii'
. - data: ArrayBufferView
Updates the decipher with
data
. If theinputEncoding
argument is given, thedata
argument is a string using the specified encoding. If theinputEncoding
argument is not given,data
must be aBuffer
. Ifdata
is aBuffer
theninputEncoding
is ignored.The
outputEncoding
specifies the output format of the enciphered data. If theoutputEncoding
is specified, a string using the specified encoding is returned. If nooutputEncoding
is provided, aBuffer
is returned.The
decipher.update()
method can be called multiple times with new data untildecipher.final()
is called. Callingdecipher.update()
afterdecipher.final()
will result in an error being thrown.data: string,Updates the decipher with
data
. If theinputEncoding
argument is given, thedata
argument is a string using the specified encoding. If theinputEncoding
argument is not given,data
must be aBuffer
. Ifdata
is aBuffer
theninputEncoding
is ignored.The
outputEncoding
specifies the output format of the enciphered data. If theoutputEncoding
is specified, a string using the specified encoding is returned. If nooutputEncoding
is provided, aBuffer
is returned.The
decipher.update()
method can be called multiple times with new data untildecipher.final()
is called. Callingdecipher.update()
afterdecipher.final()
will result in an error being thrown.@param inputEncodingThe
encoding
of thedata
string.data: ArrayBufferView,inputEncoding: undefined,): string;Updates the decipher with
data
. If theinputEncoding
argument is given, thedata
argument is a string using the specified encoding. If theinputEncoding
argument is not given,data
must be aBuffer
. Ifdata
is aBuffer
theninputEncoding
is ignored.The
outputEncoding
specifies the output format of the enciphered data. If theoutputEncoding
is specified, a string using the specified encoding is returned. If nooutputEncoding
is provided, aBuffer
is returned.The
decipher.update()
method can be called multiple times with new data untildecipher.final()
is called. Callingdecipher.update()
afterdecipher.final()
will result in an error being thrown.@param inputEncodingThe
encoding
of thedata
string.@param outputEncodingThe
encoding
of the return value.data: string,): string;Updates the decipher with
data
. If theinputEncoding
argument is given, thedata
argument is a string using the specified encoding. If theinputEncoding
argument is not given,data
must be aBuffer
. Ifdata
is aBuffer
theninputEncoding
is ignored.The
outputEncoding
specifies the output format of the enciphered data. If theoutputEncoding
is specified, a string using the specified encoding is returned. If nooutputEncoding
is provided, aBuffer
is returned.The
decipher.update()
method can be called multiple times with new data untildecipher.final()
is called. Callingdecipher.update()
afterdecipher.final()
will result in an error being thrown.@param inputEncodingThe
encoding
of thedata
string.@param outputEncodingThe
encoding
of the return value. - wrap(stream: ReadableStream): this;
Prior to Node.js 0.10, streams did not implement the entire
node:stream
module API as it is currently defined. (SeeCompatibility
for more information.)When using an older Node.js library that emits
'data'
events and has a pause method that is advisory only, thereadable.wrap()
method can be used to create aReadable
stream that uses the old stream as its data source.It will rarely be necessary to use
readable.wrap()
but the method has been provided as a convenience for interacting with older Node.js applications and libraries.import { OldReader } from './old-api-module.js'; import { Readable } from 'node:stream'; const oreader = new OldReader(); const myReader = new Readable().wrap(oreader); myReader.on('readable', () => { myReader.read(); // etc. });
@param streamAn "old style" readable stream
- chunk: any,): boolean;
The
writable.write()
method writes some data to the stream, and calls the suppliedcallback
once the data has been fully handled. If an error occurs, thecallback
will be called with the error as its first argument. Thecallback
is called asynchronously and before'error'
is emitted.The return value is
true
if the internal buffer is less than thehighWaterMark
configured when the stream was created after admittingchunk
. Iffalse
is returned, further attempts to write data to the stream should stop until the'drain'
event is emitted.While a stream is not draining, calls to
write()
will bufferchunk
, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'
event will be emitted. Oncewrite()
returns false, do not write more chunks until the'drain'
event is emitted. While callingwrite()
on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform
, because theTransform
streams are paused by default until they are piped or a'data'
or'readable'
event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readable
and use pipe. However, if callingwrite()
is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'
event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });
A
Writable
stream in object mode will always ignore theencoding
argument.@param chunkOptional data to write. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunk
may be any JavaScript value other thannull
.@param callbackCallback for when this chunk of data is flushed.
@returnsfalse
if the stream wishes for the calling code to wait for the'drain'
event to be emitted before continuing to write additional data; otherwisetrue
.chunk: any,encoding: BufferEncoding,): boolean;The
writable.write()
method writes some data to the stream, and calls the suppliedcallback
once the data has been fully handled. If an error occurs, thecallback
will be called with the error as its first argument. Thecallback
is called asynchronously and before'error'
is emitted.The return value is
true
if the internal buffer is less than thehighWaterMark
configured when the stream was created after admittingchunk
. Iffalse
is returned, further attempts to write data to the stream should stop until the'drain'
event is emitted.While a stream is not draining, calls to
write()
will bufferchunk
, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'
event will be emitted. Oncewrite()
returns false, do not write more chunks until the'drain'
event is emitted. While callingwrite()
on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform
, because theTransform
streams are paused by default until they are piped or a'data'
or'readable'
event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readable
and use pipe. However, if callingwrite()
is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'
event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });
A
Writable
stream in object mode will always ignore theencoding
argument.@param chunkOptional data to write. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunk
may be any JavaScript value other thannull
.@param encodingThe encoding, if
chunk
is a string.@param callbackCallback for when this chunk of data is flushed.
@returnsfalse
if the stream wishes for the calling code to wait for the'drain'
event to be emitted before continuing to write additional data; otherwisetrue
. - ): Disposable;
Listens once to the
abort
event on the providedsignal
.Listening to the
abort
event on abort signals is unsafe and may lead to resource leaks since another third party with the signal can calle.stopImmediatePropagation()
. Unfortunately Node.js cannot change this since it would violate the web standard. Additionally, the original API makes it easy to forget to remove listeners.This API allows safely using
AbortSignal
s in Node.js APIs by solving these two issues by listening to the event such thatstopImmediatePropagation
does not prevent the listener from running.Returns a disposable so that it may be unsubscribed from more easily.
import { addAbortListener } from 'node:events'; function example(signal) { let disposable; try { signal.addEventListener('abort', (e) => e.stopImmediatePropagation()); disposable = addAbortListener(signal, (e) => { // Do something when signal is aborted. }); } finally { disposable?.[Symbol.dispose](); } }
@returnsDisposable that removes the
abort
listener. - src: string | Object | Stream | ArrayBuffer | Blob | Iterable<any, any, any> | AsyncIterable<any, any, any> | AsyncGeneratorFunction | Promise<any>
A utility method for creating duplex streams.
Stream
converts writable stream into writableDuplex
and readable stream toDuplex
.Blob
converts into readableDuplex
.string
converts into readableDuplex
.ArrayBuffer
converts into readableDuplex
.AsyncIterable
converts into a readableDuplex
. Cannot yieldnull
.AsyncGeneratorFunction
converts into a readable/writable transformDuplex
. Must take a sourceAsyncIterable
as first parameter. Cannot yieldnull
.AsyncFunction
converts into a writableDuplex
. Must return eithernull
or undefined.
Object ({ writable, readable })
convertsreadable
andwritable
intoStream
and then combines them intoDuplex
where theDuplex
will write to thewritable
and read from thereadable
.Promise
converts into readableDuplex
. Valuenull
is ignored.
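As a brief, hedged sketch of two of these conversions (an iterable source and an async generator transform; Duplex.from is assumed to be available, as in Node.js 16.8+):
import { Duplex } from 'node:stream';

// An iterable converts into a readable Duplex.
const letters = Duplex.from(['a', 'b', 'c']);
letters.on('data', (chunk) => console.log(chunk)); // 'a', 'b', 'c'

// An async generator function converts into a readable/writable transform
// Duplex; it receives the source AsyncIterable as its first parameter.
const upper = Duplex.from(async function* (source) {
  for await (const chunk of source) {
    yield String(chunk).toUpperCase(); // must not yield null
  }
});
upper.on('data', (chunk) => console.log(chunk)); // 'HELLO'
upper.end('hello');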
- options?: Pick<DuplexOptions<Duplex>, 'signal' | 'allowHalfOpen' | 'decodeStrings' | 'encoding' | 'highWaterMark' | 'objectMode'>
A utility method for creating a
Duplex
from a webReadableStream
andWritableStream
.
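A minimal sketch, assuming the web-stream globals available in Node.js 18+ (the stream pair here is created inline purely for illustration):
import { Duplex } from 'node:stream';

// A hypothetical web stream pair; in practice these often come from another API.
const readable = new ReadableStream({
  start(controller) {
    controller.enqueue('hello from the web side');
    controller.close();
  },
});
const writable = new WritableStream({
  write(chunk) {
    console.log('web sink received:', chunk);
  },
});

const duplex = Duplex.fromWeb({ readable, writable }, { objectMode: true });
duplex.on('data', (chunk) => console.log('node side received:', chunk));
duplex.write('hello to the web side');
- name: string | symbol): Function[];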
Returns a copy of the array of listeners for the event named
eventName
.For
EventEmitter
s this behaves exactly the same as calling.listeners
on the emitter.For
EventTarget
s this is the only way to get the event listeners for the event target. This is useful for debugging and diagnostic purposes.import { getEventListeners, EventEmitter } from 'node:events'; { const ee = new EventEmitter(); const listener = () => console.log('Events are fun'); ee.on('foo', listener); console.log(getEventListeners(ee, 'foo')); // [ [Function: listener] ] } { const et = new EventTarget(); const listener = () => console.log('Events are fun'); et.addEventListener('foo', listener); console.log(getEventListeners(et, 'foo')); // [ [Function: listener] ] }
- ): number;
Returns the currently set max amount of listeners.
For
EventEmitter
s this behaves exactly the same as calling.getMaxListeners
on the emitter.For
EventTarget
s this is the only way to get the max event listeners for the event target. If the number of event handlers on a single EventTarget exceeds the max set, the EventTarget will print a warning.import { getMaxListeners, setMaxListeners, EventEmitter } from 'node:events'; { const ee = new EventEmitter(); console.log(getMaxListeners(ee)); // 10 setMaxListeners(11, ee); console.log(getMaxListeners(ee)); // 11 } { const et = new EventTarget(); console.log(getMaxListeners(et)); // 10 setMaxListeners(11, et); console.log(getMaxListeners(et)); // 11 }
- emitter: EventEmitter,eventName: string | symbol,options?: StaticEventEmitterIteratorOptions): AsyncIterator<any[]>;
import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); }); for await (const event of on(ee, 'foo')) { // The execution of this inner block is synchronous and it // processes one event at a time (even with await). Do not use // if concurrent execution is required. console.log(event); // prints ['bar'] [42] } // Unreachable here
Returns an
AsyncIterator
that iterateseventName
events. It will throw if theEventEmitter
emits'error'
. It removes all listeners when exiting the loop. Thevalue
returned by each iteration is an array composed of the emitted event arguments.An
AbortSignal
can be used to cancel waiting on events:import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ac = new AbortController(); (async () => { const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); }); for await (const event of on(ee, 'foo', { signal: ac.signal })) { // The execution of this inner block is synchronous and it // processes one event at a time (even with await). Do not use // if concurrent execution is required. console.log(event); // prints ['bar'] [42] } // Unreachable here })(); process.nextTick(() => ac.abort());
Use the
close
option to specify an array of event names that will end the iteration:import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); ee.emit('close'); }); for await (const event of on(ee, 'foo', { close: ['close'] })) { console.log(event); // prints ['bar'] [42] } // the loop will exit after 'close' is emitted console.log('done'); // prints 'done'
@returnsAn
AsyncIterator
that iterateseventName
events emitted by theemitter
eventName: string,options?: StaticEventEmitterIteratorOptions): AsyncIterator<any[]>;import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); }); for await (const event of on(ee, 'foo')) { // The execution of this inner block is synchronous and it // processes one event at a time (even with await). Do not use // if concurrent execution is required. console.log(event); // prints ['bar'] [42] } // Unreachable here
Returns an
AsyncIterator
that iterateseventName
events. It will throw if theEventEmitter
emits'error'
. It removes all listeners when exiting the loop. Thevalue
returned by each iteration is an array composed of the emitted event arguments.An
AbortSignal
can be used to cancel waiting on events:import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ac = new AbortController(); (async () => { const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); }); for await (const event of on(ee, 'foo', { signal: ac.signal })) { // The execution of this inner block is synchronous and it // processes one event at a time (even with await). Do not use // if concurrent execution is required. console.log(event); // prints ['bar'] [42] } // Unreachable here })(); process.nextTick(() => ac.abort());
Use the
close
option to specify an array of event names that will end the iteration:import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); ee.emit('close'); }); for await (const event of on(ee, 'foo', { close: ['close'] })) { console.log(event); // prints ['bar'] [42] } // the loop will exit after 'close' is emitted console.log('done'); // prints 'done'
@returnsAn
AsyncIterator
that iterateseventName
events emitted by theemitter
- emitter: EventEmitter,eventName: string | symbol,options?: StaticEventEmitterOptions): Promise<any[]>;
Creates a
Promise
that is fulfilled when theEventEmitter
emits the given event or that is rejected if theEventEmitter
emits'error'
while waiting. ThePromise
will resolve with an array of all the arguments emitted to the given event.This method is intentionally generic and works with the web platform EventTarget interface, which has no special
'error'
event semantics and does not listen to the'error'
event.import { once, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); process.nextTick(() => { ee.emit('myevent', 42); }); const [value] = await once(ee, 'myevent'); console.log(value); const err = new Error('kaboom'); process.nextTick(() => { ee.emit('error', err); }); try { await once(ee, 'myevent'); } catch (err) { console.error('error happened', err); }
The special handling of the
'error'
event is only used whenevents.once()
is used to wait for another event. Ifevents.once()
is used to wait for the 'error'
event itself, then it is treated as any other kind of event without special handling:import { EventEmitter, once } from 'node:events'; const ee = new EventEmitter(); once(ee, 'error') .then(([err]) => console.log('ok', err.message)) .catch((err) => console.error('error', err.message)); ee.emit('error', new Error('boom')); // Prints: ok boom
An
AbortSignal
can be used to cancel waiting for the event:import { EventEmitter, once } from 'node:events'; const ee = new EventEmitter(); const ac = new AbortController(); async function foo(emitter, event, signal) { try { await once(emitter, event, { signal }); console.log('event emitted!'); } catch (error) { if (error.name === 'AbortError') { console.error('Waiting for the event was canceled!'); } else { console.error('There was an error', error.message); } } } foo(ee, 'foo', ac.signal); ac.abort(); // Abort waiting for the event ee.emit('foo'); // Prints: Waiting for the event was canceled!
eventName: string,options?: StaticEventEmitterOptions): Promise<any[]>;Creates a
Promise
that is fulfilled when theEventEmitter
emits the given event or that is rejected if theEventEmitter
emits'error'
while waiting. ThePromise
will resolve with an array of all the arguments emitted to the given event.This method is intentionally generic and works with the web platform EventTarget interface, which has no special
'error'
event semantics and does not listen to the'error'
event.import { once, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); process.nextTick(() => { ee.emit('myevent', 42); }); const [value] = await once(ee, 'myevent'); console.log(value); const err = new Error('kaboom'); process.nextTick(() => { ee.emit('error', err); }); try { await once(ee, 'myevent'); } catch (err) { console.error('error happened', err); }
The special handling of the
'error'
event is only used whenevents.once()
is used to wait for another event. Ifevents.once()
is used to wait for the 'error'
event itself, then it is treated as any other kind of event without special handling:import { EventEmitter, once } from 'node:events'; const ee = new EventEmitter(); once(ee, 'error') .then(([err]) => console.log('ok', err.message)) .catch((err) => console.error('error', err.message)); ee.emit('error', new Error('boom')); // Prints: ok boom
An
AbortSignal
can be used to cancel waiting for the event:import { EventEmitter, once } from 'node:events'; const ee = new EventEmitter(); const ac = new AbortController(); async function foo(emitter, event, signal) { try { await once(emitter, event, { signal }); console.log('event emitted!'); } catch (error) { if (error.name === 'AbortError') { console.error('Waiting for the event was canceled!'); } else { console.error('There was an error', error.message); } } } foo(ee, 'foo', ac.signal); ac.abort(); // Abort waiting for the event ee.emit('foo'); // Prints: Waiting for the event was canceled!
- n?: number,): void;
import { setMaxListeners, EventEmitter } from 'node:events'; const target = new EventTarget(); const emitter = new EventEmitter(); setMaxListeners(5, target, emitter);
@param nA non-negative number. The maximum number of listeners per
EventTarget
event.@param eventTargetsZero or more {EventTarget} or {EventEmitter} instances. If none are specified,
n
is set as the default max for all newly created {EventTarget} and {EventEmitter} objects. A utility method for creating a web
ReadableStream
andWritableStream
from aDuplex
.
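A minimal sketch of this Duplex-to-web conversion, assuming Duplex.toWeb is available (Node.js 17+) and an ES module context for top-level await:
import { Duplex } from 'node:stream';

// A hypothetical echo duplex, used purely for illustration.
const duplex = Duplex.from(async function* (source) {
  for await (const chunk of source) yield chunk;
});

const { readable, writable } = Duplex.toWeb(duplex);
const writer = writable.getWriter();
await writer.write('hello');

for await (const chunk of readable) {
  console.log(chunk.toString()); // 'hello'
  break;
}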
class DiffieHellman
The
DiffieHellman
class is a utility for creating Diffie-Hellman key exchanges.Instances of the
DiffieHellman
class can be created using the createDiffieHellman function.import assert from 'node:assert'; const { createDiffieHellman, } = await import('node:crypto'); // Generate Alice's keys... const alice = createDiffieHellman(2048); const aliceKey = alice.generateKeys(); // Generate Bob's keys... const bob = createDiffieHellman(alice.getPrime(), alice.getGenerator()); const bobKey = bob.generateKeys(); // Exchange and generate the secret... const aliceSecret = alice.computeSecret(bobKey); const bobSecret = bob.computeSecret(aliceKey); // OK assert.strictEqual(aliceSecret.toString('hex'), bobSecret.toString('hex'));
- verifyError: number
A bit field containing any warnings and/or errors resulting from a check performed during initialization of the
DiffieHellman
object.The following values are valid for this property (as defined in
node:constants
module):DH_CHECK_P_NOT_SAFE_PRIME
DH_CHECK_P_NOT_PRIME
DH_UNABLE_TO_CHECK_GENERATOR
DH_NOT_SUITABLE_GENERATOR
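A hedged sketch of checking this bit field, assuming these flags are also exposed on crypto.constants (as in current Node.js releases):
const { createDiffieHellman, constants } = await import('node:crypto');

// A small prime purely for illustration; use 2048 bits or more in practice.
const dh = createDiffieHellman(512);

if (dh.verifyError & constants.DH_CHECK_P_NOT_SAFE_PRIME) {
  console.warn('p is not a safe prime');
}
if (dh.verifyError & constants.DH_NOT_SUITABLE_GENERATOR) {
  console.warn('the generator is not suitable');
}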
- otherPublicKey: ArrayBufferView,inputEncoding?: null,outputEncoding?: null
Computes the shared secret using
otherPublicKey
as the other party's public key and returns the computed shared secret. The supplied key is interpreted using the specifiedinputEncoding
, and secret is encoded using specifiedoutputEncoding
. If theinputEncoding
is not provided,otherPublicKey
is expected to be aBuffer
,TypedArray
, orDataView
.If
outputEncoding
is given a string is returned; otherwise, aBuffer
is returned.@param inputEncodingThe
encoding
of anotherPublicKey
string.@param outputEncodingThe
encoding
of the return value.otherPublicKey: string,outputEncoding?: nullComputes the shared secret using
otherPublicKey
as the other party's public key and returns the computed shared secret. The supplied key is interpreted using the specifiedinputEncoding
, and secret is encoded using specifiedoutputEncoding
. If theinputEncoding
is not provided,otherPublicKey
is expected to be aBuffer
,TypedArray
, orDataView
.If
outputEncoding
is given a string is returned; otherwise, aBuffer
is returned.@param inputEncodingThe
encoding
of anotherPublicKey
string.@param outputEncodingThe
encoding
of the return value.otherPublicKey: ArrayBufferView,inputEncoding: null,): string;Computes the shared secret using
otherPublicKey
as the other party's public key and returns the computed shared secret. The supplied key is interpreted using the specifiedinputEncoding
, and secret is encoded using specifiedoutputEncoding
. If theinputEncoding
is not provided,otherPublicKey
is expected to be aBuffer
,TypedArray
, orDataView
.If
outputEncoding
is given a string is returned; otherwise, aBuffer
is returned.@param inputEncodingThe
encoding
of anotherPublicKey
string.@param outputEncodingThe
encoding
of the return value.otherPublicKey: string,): string;Computes the shared secret using
otherPublicKey
as the other party's public key and returns the computed shared secret. The supplied key is interpreted using the specifiedinputEncoding
, and secret is encoded using specifiedoutputEncoding
. If theinputEncoding
is not provided,otherPublicKey
is expected to be aBuffer
,TypedArray
, orDataView
.If
outputEncoding
is given a string is returned; otherwise, aBuffer
is returned.@param inputEncodingThe
encoding
of anotherPublicKey
string.@param outputEncodingThe
encoding
of the return value. Generates private and public Diffie-Hellman key values unless they have been generated or computed already, and returns the public key in the specified
encoding
. This key should be transferred to the other party. Ifencoding
is provided a string is returned; otherwise aBuffer
is returned.This function is a thin wrapper around
DH_generate_key()
. In particular, once a private key has been generated or set, calling this function only updates the public key but does not generate a new private key.): string;Generates private and public Diffie-Hellman key values unless they have been generated or computed already, and returns the public key in the specified
encoding
. This key should be transferred to the other party. Ifencoding
is provided a string is returned; otherwise aBuffer
is returned.This function is a thin wrapper around
DH_generate_key()
. In particular, once a private key has been generated or set, calling this function only updates the public key but does not generate a new private key.@param encodingThe
encoding
of the return value.Returns the Diffie-Hellman generator in the specified
encoding
. Ifencoding
is provided a string is returned; otherwise aBuffer
is returned.): string;Returns the Diffie-Hellman generator in the specified
encoding
. Ifencoding
is provided a string is returned; otherwise aBuffer
is returned.@param encodingThe
encoding
of the return value.- ): string;
Returns the Diffie-Hellman prime in the specified
encoding
. Ifencoding
is provided a string is returned; otherwise aBuffer
is returned.@param encodingThe
encoding
of the return value. Returns the Diffie-Hellman private key in the specified
encoding
. Ifencoding
is provided a string is returned; otherwise aBuffer
is returned.): string;Returns the Diffie-Hellman private key in the specified
encoding
. Ifencoding
is provided a string is returned; otherwise aBuffer
is returned.@param encodingThe
encoding
of the return value.Returns the Diffie-Hellman public key in the specified
encoding
. Ifencoding
is provided a string is returned; otherwise aBuffer
is returned.): string;Returns the Diffie-Hellman public key in the specified
encoding
. Ifencoding
is provided a string is returned; otherwise aBuffer
is returned.@param encodingThe
encoding
of the return value.- privateKey: ArrayBufferView): void;
Sets the Diffie-Hellman private key. If the
encoding
argument is provided,privateKey
is expected to be a string. If noencoding
is provided,privateKey
is expected to be aBuffer
,TypedArray
, orDataView
.This function does not automatically compute the associated public key. Either
diffieHellman.setPublicKey()
ordiffieHellman.generateKeys()
can be used to manually provide the public key or to automatically derive it.privateKey: string,encoding: BufferEncoding): void;Sets the Diffie-Hellman private key. If the
encoding
argument is provided,privateKey
is expected to be a string. If noencoding
is provided,privateKey
is expected to be aBuffer
,TypedArray
, orDataView
.This function does not automatically compute the associated public key. Either
diffieHellman.setPublicKey()
ordiffieHellman.generateKeys()
can be used to manually provide the public key or to automatically derive it.@param encodingThe
encoding
of theprivateKey
string. - publicKey: ArrayBufferView): void;
Sets the Diffie-Hellman public key. If the
encoding
argument is provided,publicKey
is expected to be a string. If noencoding
is provided,publicKey
is expected to be aBuffer
,TypedArray
, orDataView
.publicKey: string,encoding: BufferEncoding): void;Sets the Diffie-Hellman public key. If the
encoding
argument is provided,publicKey
is expected to be a string. If noencoding
is provided,publicKey
is expected to be aBuffer
,TypedArray
, orDataView
.@param encodingThe
encoding
of thepublicKey
string.
class ECDH
The
ECDH
class is a utility for creating Elliptic Curve Diffie-Hellman (ECDH) key exchanges.Instances of the
ECDH
class can be created using the createECDH function.import assert from 'node:assert'; const { createECDH, } = await import('node:crypto'); // Generate Alice's keys... const alice = createECDH('secp521r1'); const aliceKey = alice.generateKeys(); // Generate Bob's keys... const bob = createECDH('secp521r1'); const bobKey = bob.generateKeys(); // Exchange and generate the secret... const aliceSecret = alice.computeSecret(bobKey); const bobSecret = bob.computeSecret(aliceKey); assert.strictEqual(aliceSecret.toString('hex'), bobSecret.toString('hex')); // OK
- otherPublicKey: ArrayBufferView
Computes the shared secret using
otherPublicKey
as the other party's public key and returns the computed shared secret. The supplied key is interpreted using specifiedinputEncoding
, and the returned secret is encoded using the specifiedoutputEncoding
. If theinputEncoding
is not provided,otherPublicKey
is expected to be aBuffer
,TypedArray
, orDataView
.If
outputEncoding
is given a string will be returned; otherwise aBuffer
is returned.ecdh.computeSecret
will throw anERR_CRYPTO_ECDH_INVALID_PUBLIC_KEY
error whenotherPublicKey
lies outside of the elliptic curve. SinceotherPublicKey
is usually supplied from a remote user over an insecure network, be sure to handle this exception accordingly.otherPublicKey: string,Computes the shared secret using
otherPublicKey
as the other party's public key and returns the computed shared secret. The supplied key is interpreted using specifiedinputEncoding
, and the returned secret is encoded using the specifiedoutputEncoding
. If theinputEncoding
is not provided,otherPublicKey
is expected to be aBuffer
,TypedArray
, orDataView
.If
outputEncoding
is given a string will be returned; otherwise aBuffer
is returned.ecdh.computeSecret
will throw anERR_CRYPTO_ECDH_INVALID_PUBLIC_KEY
error whenotherPublicKey
lies outside of the elliptic curve. SinceotherPublicKey
is usually supplied from a remote user over an insecure network, be sure to handle this exception accordingly.@param inputEncodingThe
encoding
of theotherPublicKey
string.otherPublicKey: ArrayBufferView,): string;Computes the shared secret using
otherPublicKey
as the other party's public key and returns the computed shared secret. The supplied key is interpreted using specifiedinputEncoding
, and the returned secret is encoded using the specifiedoutputEncoding
. If theinputEncoding
is not provided,otherPublicKey
is expected to be aBuffer
,TypedArray
, orDataView
.If
outputEncoding
is given a string will be returned; otherwise aBuffer
is returned.ecdh.computeSecret
will throw anERR_CRYPTO_ECDH_INVALID_PUBLIC_KEY
error whenotherPublicKey
lies outside of the elliptic curve. SinceotherPublicKey
is usually supplied from a remote user over an insecure network, be sure to handle this exception accordingly.@param outputEncodingThe
encoding
of the return value.otherPublicKey: string,): string;Computes the shared secret using
otherPublicKey
as the other party's public key and returns the computed shared secret. The supplied key is interpreted using specifiedinputEncoding
, and the returned secret is encoded using the specifiedoutputEncoding
. If theinputEncoding
is not provided,otherPublicKey
is expected to be aBuffer
,TypedArray
, orDataView
.If
outputEncoding
is given a string will be returned; otherwise aBuffer
is returned.ecdh.computeSecret
will throw anERR_CRYPTO_ECDH_INVALID_PUBLIC_KEY
error whenotherPublicKey
lies outside of the elliptic curve. SinceotherPublicKey
is usually supplied from a remote user over an insecure network, be sure to handle this exception accordingly.@param inputEncodingThe
encoding
of theotherPublicKey
string.@param outputEncodingThe
encoding
of the return value. Generates private and public EC Diffie-Hellman key values, and returns the public key in the specified
format
andencoding
. This key should be transferred to the other party.The
format
argument specifies point encoding and can be'compressed'
or'uncompressed'
. Ifformat
is not specified, the point will be returned in'uncompressed'
format.If
encoding
is provided a string is returned; otherwise aBuffer
is returned.): string;Generates private and public EC Diffie-Hellman key values, and returns the public key in the specified
format
andencoding
. This key should be transferred to the other party.The
format
argument specifies point encoding and can be'compressed'
or'uncompressed'
. Ifformat
is not specified, the point will be returned in'uncompressed'
format.If
encoding
is provided a string is returned; otherwise aBuffer
is returned.@param encodingThe
encoding
of the return value.If
encoding
is specified, a string is returned; otherwise aBuffer
is returned.@returnsThe EC Diffie-Hellman private key in the specified
encoding
.): string;If
encoding
is specified, a string is returned; otherwise aBuffer
is returned.@param encodingThe
encoding
of the return value.@returnsThe EC Diffie-Hellman private key in the specified
encoding
.- encoding?: null,
The
format
argument specifies point encoding and can be'compressed'
or'uncompressed'
. Ifformat
is not specified the point will be returned in'uncompressed'
format.If
encoding
is specified, a string is returned; otherwise aBuffer
is returned.@param encodingThe
encoding
of the return value.@returnsThe EC Diffie-Hellman public key in the specified
encoding
andformat
.): string;The
format
argument specifies point encoding and can be'compressed'
or'uncompressed'
. Ifformat
is not specified the point will be returned in'uncompressed'
format.If
encoding
is specified, a string is returned; otherwise aBuffer
is returned.@param encodingThe
encoding
of the return value.@returnsThe EC Diffie-Hellman public key in the specified
encoding
andformat
.
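For example, a short sketch comparing the two point formats:
const { createECDH } = await import('node:crypto');

const ecdh = createECDH('prime256v1');
ecdh.generateKeys();

const uncompressed = ecdh.getPublicKey('hex'); // default 'uncompressed' format
const compressed = ecdh.getPublicKey('hex', 'compressed');

console.log(uncompressed.length); // 130 hex characters (65 bytes) for prime256v1
console.log(compressed.length); // 66 hex characters (33 bytes)
- privateKey: ArrayBufferView): void;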
Sets the EC Diffie-Hellman private key. If
encoding
is provided,privateKey
is expected to be a string; otherwiseprivateKey
is expected to be aBuffer
,TypedArray
, orDataView
.If
privateKey
is not valid for the curve specified when theECDH
object was created, an error is thrown. Upon setting the private key, the associated public point (key) is also generated and set in theECDH
object.privateKey: string,): void;Sets the EC Diffie-Hellman private key. If
encoding
is provided,privateKey
is expected to be a string; otherwiseprivateKey
is expected to be aBuffer
,TypedArray
, orDataView
.If
privateKey
is not valid for the curve specified when theECDH
object was created, an error is thrown. Upon setting the private key, the associated public point (key) is also generated and set in theECDH
object.@param encodingThe
encoding
of theprivateKey
string. - curve: string,outputEncoding?: 'latin1' | 'base64' | 'base64url' | 'hex',format?: 'uncompressed' | 'compressed' | 'hybrid'
Converts the EC Diffie-Hellman public key specified by
key
andcurve
to the format specified byformat
. Theformat
argument specifies point encoding and can be'compressed'
,'uncompressed'
or'hybrid'
. The supplied key is interpreted using the specifiedinputEncoding
, and the returned key is encoded using the specifiedoutputEncoding
.Use getCurves to obtain a list of available curve names. On recent OpenSSL releases,
openssl ecparam -list_curves
will also display the name and description of each available elliptic curve.If
format
is not specified the point will be returned in'uncompressed'
format.If the
inputEncoding
is not provided,key
is expected to be aBuffer
,TypedArray
, orDataView
.Example (uncompressing a key):
const { createECDH, ECDH, } = await import('node:crypto'); const ecdh = createECDH('secp256k1'); ecdh.generateKeys(); const compressedKey = ecdh.getPublicKey('hex', 'compressed'); const uncompressedKey = ECDH.convertKey(compressedKey, 'secp256k1', 'hex', 'hex', 'uncompressed'); // The converted key and the uncompressed public key should be the same console.log(uncompressedKey === ecdh.getPublicKey('hex'));
@param inputEncodingThe
encoding
of thekey
string.@param outputEncodingThe
encoding
of the return value.
class Hash
The
Hash
class is a utility for creating hash digests of data. It can be used in one of two ways:- As a
stream
that is both readable and writable, where data is written to produce a computed hash digest on the readable side, or - Using the
hash.update()
andhash.digest()
methods to produce the computed hash.
The createHash method is used to create
Hash
instances.Hash
objects are not to be created directly using thenew
keyword.Example: Using
Hash
objects as streams:const { createHash, } = await import('node:crypto'); const hash = createHash('sha256'); hash.on('readable', () => { // Only one element is going to be produced by the // hash stream. const data = hash.read(); if (data) { console.log(data.toString('hex')); // Prints: // 6a2da20943931e9834fc12cfe5bb47bbd9ae43489a30726962b576f4e3993e50 } }); hash.write('some data to hash'); hash.end();
Example: Using
Hash
and piped streams:import { createReadStream } from 'node:fs'; import { stdout } from 'node:process'; const { createHash } = await import('node:crypto'); const hash = createHash('sha256'); const input = createReadStream('test.js'); input.pipe(hash).setEncoding('hex').pipe(stdout);
Example: Using the
hash.update()
andhash.digest()
methods:const { createHash, } = await import('node:crypto'); const hash = createHash('sha256'); hash.update('some data to hash'); console.log(hash.digest('hex')); // Prints: // 6a2da20943931e9834fc12cfe5bb47bbd9ae43489a30726962b576f4e3993e50
- allowHalfOpen: boolean
If
false
then the stream will automatically end the writable side when the readable side ends. Set initially by theallowHalfOpen
constructor option, which defaults totrue
.This can be changed manually to change the half-open behavior of an existing
Duplex
stream instance, but must be changed before the'end'
event is emitted. - readable: boolean
Is
true
if it is safe to call read, which means the stream has not been destroyed or emitted'error'
or'end'
. - readonly readableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'end'
. - readonly readableEncoding: null | BufferEncoding
Getter for the property
encoding
of a givenReadable
stream. Theencoding
property can be set using the setEncoding method. - readonly readableFlowing: null | boolean
This property reflects the current state of a
Readable
stream as described in the Three states section. - readonly readableHighWaterMark: number
Returns the value of
highWaterMark
passed when creating thisReadable
. - readonly readableLength: number
This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the
highWaterMark
. - readonly writable: boolean
Is
true
if it is safe to callwritable.write()
, which means the stream has not been destroyed, errored, or ended. - readonly writableCorked: number
Number of times
writable.uncork()
needs to be called in order to fully uncork the stream. - readonly writableEnded: boolean
Is
true
afterwritable.end()
has been called. This property does not indicate whether the data has been flushed, for this usewritable.writableFinished
instead. - readonly writableHighWaterMark: number
Returns the value of
highWaterMark
passed when creating thisWritable
. - readonly writableLength: number
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the
highWaterMark
. - readonly writableNeedDrain: boolean
Is
true
if the stream's buffer has been full and stream will emit'drain'
. - static captureRejections: boolean
Value: boolean
Change the default
captureRejections
option on all newEventEmitter
objects. - readonly static captureRejectionSymbol: typeof captureRejectionSymbol
Value:
Symbol.for('nodejs.rejection')
See how to write a custom
rejection handler
. - static defaultMaxListeners: number
By default, a maximum of
10
listeners can be registered for any single event. This limit can be changed for individualEventEmitter
instances using theemitter.setMaxListeners(n)
method. To change the default for allEventEmitter
instances, theevents.defaultMaxListeners
property can be used. If this value is not a positive number, aRangeError
is thrown.Take caution when setting the
events.defaultMaxListeners
because the change affects allEventEmitter
instances, including those created before the change is made. However, callingemitter.setMaxListeners(n)
still has precedence overevents.defaultMaxListeners
.This is not a hard limit. The
EventEmitter
instance will allow more listeners to be added but will output a trace warning to stderr indicating that a "possible EventEmitter memory leak" has been detected. For any singleEventEmitter
, theemitter.getMaxListeners()
andemitter.setMaxListeners()
methods can be used to temporarily avoid this warning:import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.setMaxListeners(emitter.getMaxListeners() + 1); emitter.once('event', () => { // do stuff emitter.setMaxListeners(Math.max(emitter.getMaxListeners() - 1, 0)); });
The
--trace-warnings
command-line flag can be used to display the stack trace for such warnings.The emitted warning can be inspected with
process.on('warning')
and will have the additionalemitter
,type
, andcount
properties, referring to the event emitter instance, the event's name and the number of attached listeners, respectively. Itsname
property is set to'MaxListenersExceededWarning'
. - readonly static errorMonitor: typeof errorMonitor
This symbol shall be used to install a listener for only monitoring
'error'
events. Listeners installed using this symbol are called before the regular'error'
listeners are called.Installing a listener using this symbol does not change the behavior once an
'error'
event is emitted. Therefore, the process will still crash if no regular'error'
listener is installed. Calls
readable.destroy()
with anAbortError
and returns a promise that fulfills when the stream is finished.- event: 'close',listener: () => void): this;
Event emitter. The defined events on documents include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
The same event list applies to each of the overloads below.
event: 'data',listener: (chunk: any) => void): this;
event: 'drain',listener: () => void): this;
event: 'end',listener: () => void): this;
event: 'error',): this;
event: 'finish',listener: () => void): this;
event: 'pause',listener: () => void): this;
event: 'pipe',): this;
event: 'readable',listener: () => void): this;
event: 'resume',listener: () => void): this;
event: 'unpipe',): this;
event: string | symbol,listener: (...args: any[]) => void): this;
This method returns a new stream with chunks of the underlying stream paired with a counter in the form
[index, chunk]
. The first index value is0
and it increases by 1 for each chunk produced.@returnsa stream of indexed pairs.
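A short sketch, assuming the experimental readable helper methods (Node.js 17.5+) and an ES module context:
import { Readable } from 'node:stream';

const pairs = await Readable.from(['a', 'b', 'c'])
  .asIndexedPairs()
  .toArray();
console.log(pairs); // [[0, 'a'], [1, 'b'], [2, 'c']]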
- stream: T | ComposeFnParam | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
- copy(
Creates a new
Hash
object that contains a deep copy of the internal state of the currentHash
object.The optional
options
argument controls stream behavior. For XOF hash functions such as'shake256'
, theoutputLength
option can be used to specify the desired output length in bytes.An error is thrown when an attempt is made to copy the
Hash
object after itshash.digest()
method has been called.// Calculate a rolling hash. const { createHash, } = await import('node:crypto'); const hash = createHash('sha256'); hash.update('one'); console.log(hash.copy().digest('hex')); hash.update('two'); console.log(hash.copy().digest('hex')); hash.update('three'); console.log(hash.copy().digest('hex')); // Etc.
@param optionsstream.transform
options The
writable.cork()
method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.The primary intent of
writable.cork()
is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination,writable.cork()
buffers all the chunks untilwritable.uncork()
is called, which will pass them all towritable._writev()
, if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use ofwritable.cork()
without implementingwritable._writev()
may have an adverse effect on throughput.See also:
writable.uncork(), writable._writev().
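The usual pattern, as a minimal sketch (assuming stream is an existing Writable), is to defer the uncork to the next event-loop tick so that all writes issued in the current tick are batched:
stream.cork();
stream.write('some ');
stream.write('data ');
// Deferring uncork batches the two writes into a single flush.
process.nextTick(() => stream.uncork());
- ): this;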
Destroy the stream. Optionally emit an
'error'
event, and emit a'close'
event (unlessemitClose
is set tofalse
). After this call, the readable stream will release any internal resources and subsequent calls topush()
will be ignored.Once
destroy()
has been called any further calls will be a no-op and no further errors except from_destroy()
may be emitted as'error'
.Implementors should not override this method, but instead implement
readable._destroy()
.@param errorError which will be passed as payload in
'error'
event Calculates the digest of all of the data passed to be hashed (using the
hash.update()
method). Ifencoding
is provided a string will be returned; otherwise aBuffer
is returned.The
Hash
object cannot be used again after the hash.digest()
method has been called. Multiple calls will cause an error to be thrown.): string;Calculates the digest of all of the data passed to be hashed (using the
hash.update()
method). Ifencoding
is provided a string will be returned; otherwise aBuffer
is returned.The
Hash
object cannot be used again after the hash.digest()
method has been called. Multiple calls will cause an error to be thrown.@param encodingThe
encoding
of the return value.- drop(limit: number,
This method returns a new stream with the first limit chunks dropped from the start.
@param limitthe number of chunks to drop from the readable.
@returnsa stream with limit chunks dropped from the start.
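A one-line sketch, assuming the experimental readable helpers and an ES module context:
import { Readable } from 'node:stream';

console.log(await Readable.from([1, 2, 3, 4]).drop(2).toArray()); // [3, 4]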
- emit(event: 'close'): boolean;
Synchronously calls each of the listeners registered for the event named
eventName
, in the order they were registered, passing the supplied arguments to each.Returns
true
if the event had listeners,false
otherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener
- end(cb?: () => void): this;
Calling the
writable.end()
method signals that no more data will be written to theWritable
. The optionalchunk
andencoding
arguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!
end(chunk: any,cb?: () => void): this;Calling the
writable.end()
method signals that no more data will be written to theWritable
. The optionalchunk
andencoding
arguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!
@param chunkOptional data to write. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunk
may be any JavaScript value other thannull
.end(chunk: any,encoding: BufferEncoding,cb?: () => void): this;Calling the
writable.end()
method signals that no more data will be written to theWritable
. The optionalchunk
andencoding
arguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!
@param chunkOptional data to write. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunk
may be any JavaScript value other thannull
.@param encodingThe encoding if
chunk
is a string Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbol
s.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]
- ): Promise<boolean>;
This method is similar to
Array.prototype.every
and calls fn on each chunk in the stream to check whether all awaited return values are truthy for fn. As soon as an awaited fn call on a chunk returns a falsy value, the stream is destroyed and the promise is fulfilled with
. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled withtrue
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
true
if fn returned a truthy value for every one of the chunks. This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be
await
ed.@param fna function to filter chunks from the stream. Async or not.
@returnsa stream filtered with the predicate fn.
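A brief sketch of both every() and filter(), assuming the experimental readable helpers:
import { Readable } from 'node:stream';

// every(): fulfilled with true only if the predicate holds for every chunk.
console.log(await Readable.from([1, 2, 3]).every((x) => x > 0)); // true

// filter(): keeps only the chunks for which the predicate is truthy.
const evens = await Readable.from([1, 2, 3, 4])
  .filter((x) => x % 2 === 0)
  .toArray();
console.log(evens); // [2, 4]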
- ): Promise<undefined | T>;
This method is similar to
Array.prototype.find
and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the chunk for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefined
if no element was found.find(): Promise<any>;This method is similar to
Array.prototype.find
and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the chunk for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefined
if no element was found.
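A short sketch, assuming the experimental readable helpers:
import { Readable } from 'node:stream';

console.log(await Readable.from([1, 2, 3, 4]).find((x) => x > 2)); // 3
console.log(await Readable.from([1, 2, 3, 4]).find((x) => x > 9)); // undefined
This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.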
It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.
@param fna function to map over every chunk in the stream. May be async. May be a stream or generator.
@returnsa stream flat-mapped with the function fn.
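For example, a sketch where each chunk expands into an iterable:
import { Readable } from 'node:stream';

const out = await Readable.from([1, 2])
  .flatMap((x) => [x, x * 10]) // each chunk may map to an iterable or stream
  .toArray();
console.log(out); // [1, 10, 2, 20]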
- ): Promise<void>;
This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be
await
ed.This method is different from
for await...of
loops in that it can optionally process chunks concurrently. In addition, aforEach
iteration can only be stopped by having passed asignal
option and aborting the related AbortController whilefor await...of
can be stopped withbreak
orreturn
. In either case the stream will be destroyed.This method is different from listening to the
'data'
event in that it uses thereadable
event in the underlying machinery and can limit the number of concurrent fn calls.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise for when the stream has finished.
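A minimal sketch; the fn here is async to show that returned promises are awaited:
import { Readable } from 'node:stream';

await Readable.from([1, 2, 3]).forEach(async (x) => {
  console.log(x); // 1, 2, 3
});
console.log('done'); // the returned promise settles once the stream finishes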
Returns the current max listener value for the
EventEmitter
which is either set byemitter.setMaxListeners(n)
or defaults to EventEmitter.defaultMaxListeners.The
readable.isPaused()
method returns the current operating state of theReadable
. This is used primarily by the mechanism that underlies thereadable.pipe()
method. In most typical cases, there will be no reason to use this method directly.const readable = new stream.Readable(); readable.isPaused(); // === false readable.pause(); readable.isPaused(); // === true readable.resume(); readable.isPaused(); // === false
- options?: { destroyOnReturn: boolean }): AsyncIterator<any>;
The iterator created by this method gives users the option to cancel the destruction of the stream if the
for await...of
loop is exited byreturn
,break
, orthrow
, or if the iterator should destroy the stream if the stream emitted an error during iteration.
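A short sketch of the destroyOnReturn option:
import { Readable } from 'node:stream';

const readable = Readable.from([1, 2, 3]);
for await (const chunk of readable.iterator({ destroyOnReturn: false })) {
  console.log(chunk); // 1
  break; // with destroyOnReturn: false, breaking does not destroy the stream
}
console.log(readable.destroyed); // false
- eventName: string | symbol,listener?: Function): number;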
Returns the number of listeners listening for the event named
eventName
. Iflistener
is provided, it will return how many times the listener is found in the list of the listeners of the event.@param eventNameThe name of the event being listened for
@param listenerThe event handler function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName
.server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ]
- map(
This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be
await
ed before being passed to the result stream.@param fna function to map over every chunk in the stream. Async or not.
@returnsa stream mapped with the function fn.
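A brief sketch; the mapper here returns a promise to show that it is awaited:
import { Readable } from 'node:stream';

const doubled = await Readable.from([1, 2, 3])
  .map(async (x) => x * 2)
  .toArray();
console.log(doubled); // [2, 4, 6]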
- eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for
emitter.removeListener()
. - on(event: 'close',listener: () => void): this;
Adds the
listener
function to the end of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
- once(event: 'close',listener: () => void): this;
Adds a one-time
listener
function for the event namedeventName
. The next timeeventName
is triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
The
readable.pause()
method will cause a stream in flowing mode to stop emitting'data'
events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.const readable = getReadableStreamSomehow(); readable.on('data', (chunk) => { console.log(`Received ${chunk.length} bytes of data.`); readable.pause(); console.log('There will be no additional data for 1 second.'); setTimeout(() => { console.log('Now data will start flowing again.'); readable.resume(); }, 1000); });
The
readable.pause()
method has no effect if there is a'readable'
event listener.- event: 'close',listener: () => void): this;
Adds the
listener
function to the beginning of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
- event: 'close',listener: () => void): this;
Adds a one-time
listener
function for the event namedeventName
to the beginning of the listeners array. The next timeeventName
is triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName
, including any wrappers (such as those created by.once()
).import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log');
- read(size?: number): any;
The
readable.read()
method reads data out of the internal buffer and returns it. If no data is available to be read,null
is returned. By default, the data is returned as aBuffer
object unless an encoding has been specified using thereadable.setEncoding()
method or the stream is operating in object mode.The optional
size
argument specifies a specific number of bytes to read. Ifsize
bytes are not available to be read,null
will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.If the
size
argument is not specified, all of the data contained in the internal buffer will be returned.The
size
argument must be less than or equal to 1 GiB.The
readable.read()
method should only be called onReadable
streams operating in paused mode. In flowing mode,readable.read()
is called automatically until the internal buffer is fully drained.const readable = getReadableStreamSomehow(); // 'readable' may be triggered multiple times as data is buffered in readable.on('readable', () => { let chunk; console.log('Stream is readable (new data received in buffer)'); // Use a loop to make sure we read all currently available data while (null !== (chunk = readable.read())) { console.log(`Read ${chunk.length} bytes of data...`); } }); // 'end' will be triggered once when there is no more data available readable.on('end', () => { console.log('Reached end of stream.'); });
Each call to
readable.read()
returns a chunk of data, ornull
. The chunks are not concatenated. Awhile
loop is necessary to consume all data currently in the buffer. When reading a large file.read()
may returnnull
, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new'readable'
event will be emitted when there is more data in the buffer. Finally the'end'
event will be emitted when there is no more data to come.Therefore to read a file's whole contents from a
readable
, it is necessary to collect chunks across multiple'readable'
events:const chunks = []; readable.on('readable', () => { let chunk; while (null !== (chunk = readable.read())) { chunks.push(chunk); } }); readable.on('end', () => { const content = chunks.join(''); });
A
Readable
stream in object mode will always return a single item from a call toreadable.read(size)
, regardless of the value of thesize
argument.If the
readable.read()
method returns a chunk of data, a'data'
event will also be emitted.Calling read after the
'end'
event has been emitted will returnnull
. No runtime error will be raised.@param sizeOptional argument to specify how much data to read.
- initial?: undefined,): Promise<T>;
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeError
with theERR_INVALID_ARGS
code property.The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the
readable.map
method.@param fna reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
initial: T,): Promise<T>;This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeError
with theERR_INVALID_ARGS
code property.The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the
readable.map
method.@param fna reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
- eventName?: string | symbol): this;
Removes all listeners, or those of the specified eventName.

It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).

Returns a reference to the EventEmitter, so that calls can be chained.
- event: 'close',listener: () => void): this;
Removes the specified listener from the listener array for the event named eventName.

const callback = (stream) => {
  console.log('someone connected!');
};
server.on('connection', callback);
// ...
server.removeListener('connection', callback);

removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance.

Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from emit() in progress. Subsequent events behave as expected.

import { EventEmitter } from 'node:events';
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();

const callbackA = () => {
  console.log('A');
  myEmitter.removeListener('event', callbackB);
};

const callbackB = () => {
  console.log('B');
};

myEmitter.on('event', callbackA);
myEmitter.on('event', callbackB);

// callbackA removes listener callbackB but it will still be called.
// Internal listener array at time of emit [callbackA, callbackB]
myEmitter.emit('event');
// Prints:
//   A
//   B

// callbackB is now removed.
// Internal listener array [callbackA]
myEmitter.emit('event');
// Prints:
//   A

Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.

When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed:

import { EventEmitter } from 'node:events';
const ee = new EventEmitter();

function pong() {
  console.log('pong');
}

ee.on('ping', pong);
ee.once('ping', pong);
ee.removeListener('ping', pong);

ee.emit('ping');
ee.emit('ping');
Returns a reference to the EventEmitter, so that calls can be chained.

The readable.resume() method causes an explicitly paused Readable stream to resume emitting 'data' events, switching the stream into flowing mode.

The readable.resume() method can be used to fully consume the data from a stream without actually processing any of that data:

getReadableStreamSomehow()
  .resume()
  .on('end', () => {
    console.log('Reached the end, but did not read anything.');
  });

The readable.resume() method has no effect if there is a 'readable' event listener.
- encoding: BufferEncoding): this;
The writable.setDefaultEncoding() method sets the default encoding for a Writable stream.

@param encoding The new default encoding
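A minimal sketch, assuming a file stream created with fs.createWriteStream (the file name and hex payload are hypothetical): after setting the default encoding, plain string writes are decoded as hex rather than utf8.

import { createWriteStream } from 'node:fs';

const out = createWriteStream('example.bin');
out.setDefaultEncoding('hex');
// '68656c6c6f' decodes to the five bytes of 'hello'.
out.write('68656c6c6f');
out.end();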
- encoding: BufferEncoding): this;
The readable.setEncoding() method sets the character encoding for data read from the Readable stream.

By default, no encoding is assigned and stream data will be returned as Buffer objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than as Buffer objects. For instance, calling readable.setEncoding('utf8') will cause the output data to be interpreted as UTF-8 data, and passed as strings. Calling readable.setEncoding('hex') will cause the data to be encoded in hexadecimal string format.

The Readable stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream as Buffer objects.

const readable = getReadableStreamSomehow();
readable.setEncoding('utf8');
readable.on('data', (chunk) => {
  assert.equal(typeof chunk, 'string');
  console.log('Got %d characters of string data:', chunk.length);
});

@param encoding The encoding to use.
- n: number): this;
By default EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps finding memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.

Returns a reference to the EventEmitter, so that calls can be chained.
- some(): Promise<boolean>;
This method is similar to Array.prototype.some and calls fn on each chunk in the stream until the awaited return value is true (or any truthy value). Once an fn call on a chunk's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false.

@param fn A function to call on each chunk of the stream. Async or not.
@returns A promise evaluating to true if fn returned a truthy value for at least one of the chunks.
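A minimal sketch of some(), assuming a Readable built with Readable.from; the stream is destroyed as soon as the predicate resolves to a truthy value:

import { Readable } from 'node:stream';

const anyBig = await Readable
  .from([1, 2, 3, 4])
  .some((chunk) => chunk > 3);
console.log(anyBig); // true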
- @param limit The number of chunks to take from the readable.
@returns A stream with limit chunks taken.
- ): Promise<any[]>;
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.
@returns A promise containing an array with the contents of the stream.
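A minimal sketch combining take() and toArray(), assuming a small Readable built with Readable.from:

import { Readable } from 'node:stream';

// Keep only the first three chunks, then collect them into an array.
const firstThree = await Readable
  .from([1, 2, 3, 4, 5])
  .take(3)
  .toArray();
console.log(firstThree); // [1, 2, 3]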
The writable.uncork() method flushes all data buffered since cork was called.

When using writable.cork() and writable.uncork() to manage the buffering of writes to a stream, defer calls to writable.uncork() using process.nextTick(). Doing so allows batching of all writable.write() calls that occur within a given Node.js event loop phase.

stream.cork();
stream.write('some ');
stream.write('data ');
process.nextTick(() => stream.uncork());

If the writable.cork() method is called multiple times on a stream, the same number of calls to writable.uncork() must be called to flush the buffered data.

stream.cork();
stream.write('some ');
stream.cork();
stream.write('data ');
process.nextTick(() => {
  stream.uncork();
  // The data will not be flushed until uncork() is called a second time.
  stream.uncork();
});

See also: writable.cork().
- destination?: WritableStream): this;
The readable.unpipe() method detaches a Writable stream previously attached using the pipe method.

If the destination is not specified, then all pipes are detached.

If the destination is specified, but no pipe is set up for it, then the method does nothing.

import fs from 'node:fs';
const readable = getReadableStreamSomehow();
const writable = fs.createWriteStream('file.txt');
// All the data from readable goes into 'file.txt',
// but only for the first second.
readable.pipe(writable);
setTimeout(() => {
  console.log('Stop writing to file.txt.');
  readable.unpipe(writable);
  console.log('Manually close the file stream.');
  writable.end();
}, 1000);

@param destination Optional specific stream to unpipe
- chunk: any,encoding?: BufferEncoding): void;
Passing chunk as null signals the end of the stream (EOF) and behaves the same as readable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.

The readable.unshift() method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.

The stream.unshift(chunk) method cannot be called after the 'end' event has been emitted or a runtime error will be thrown.

Developers using stream.unshift() often should consider switching to use of a Transform stream instead. See the API for stream implementers section for more information.

// Pull off a header delimited by \n\n.
// Use unshift() if we get too much.
// Call the callback with (error, header, stream).
import { StringDecoder } from 'node:string_decoder';
function parseHeader(stream, callback) {
  stream.on('error', callback);
  stream.on('readable', onReadable);
  const decoder = new StringDecoder('utf8');
  let header = '';
  function onReadable() {
    let chunk;
    while (null !== (chunk = stream.read())) {
      const str = decoder.write(chunk);
      if (str.includes('\n\n')) {
        // Found the header boundary.
        const split = str.split(/\n\n/);
        header += split.shift();
        const remaining = split.join('\n\n');
        const buf = Buffer.from(remaining, 'utf8');
        stream.removeListener('error', callback);
        // Remove the 'readable' listener before unshifting.
        stream.removeListener('readable', onReadable);
        if (buf.length)
          stream.unshift(buf);
        // Now the body of the message can be read from the stream.
        callback(null, header, stream);
        return;
      }
      // Still reading the header.
      header += str;
    }
  }
}

Unlike push, stream.unshift(chunk) will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results if readable.unshift() is called during a read (i.e. from within a _read implementation on a custom stream). Following the call to readable.unshift() with an immediate push will reset the reading state appropriately; however, it is best to simply avoid calling readable.unshift() while in the process of performing a read.

@param chunk Chunk of data to unshift onto the read queue. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray}, {DataView} or null. For object mode streams, chunk may be any JavaScript value.
@param encoding Encoding of string chunks. Must be a valid Buffer encoding, such as 'utf8' or 'ascii'.
Updates the hash content with the given data, the encoding of which is given in inputEncoding. If inputEncoding is not provided, and the data is a string, an encoding of 'utf8' is enforced. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.

This can be called many times with new data as it is streamed.
data: string,
Updates the hash content with the given data, the encoding of which is given in inputEncoding. If inputEncoding is not provided, and the data is a string, an encoding of 'utf8' is enforced. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.

This can be called many times with new data as it is streamed.

@param inputEncoding The encoding of the data string.
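A minimal sketch of incremental hashing, assuming SHA-256; each update() call appends more input before the digest is read once:

import { createHash } from 'node:crypto';

const hash = createHash('sha256');
hash.update('some data ');         // utf8 is assumed for strings
hash.update('split across calls'); // same digest as hashing the concatenation
console.log(hash.digest('hex'));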
- wrap(stream: ReadableStream): this;
Prior to Node.js 0.10, streams did not implement the entire node:stream module API as it is currently defined. (See Compatibility for more information.)

When using an older Node.js library that emits 'data' events and has a pause method that is advisory only, the readable.wrap() method can be used to create a Readable stream that uses the old stream as its data source.

It will rarely be necessary to use readable.wrap() but the method has been provided as a convenience for interacting with older Node.js applications and libraries.

import { OldReader } from './old-api-module.js';
import { Readable } from 'node:stream';
const oreader = new OldReader();
const myReader = new Readable().wrap(oreader);

myReader.on('readable', () => {
  myReader.read(); // etc.
});

@param stream An "old style" readable stream
- chunk: any,): boolean;
The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

function write(data, cb) {
  if (!stream.write(data)) {
    stream.once('drain', cb);
  } else {
    process.nextTick(cb);
  }
}

// Wait for cb to be called before doing any other write.
write('hello', () => {
  console.log('Write completed, do more writes now.');
});

A Writable stream in object mode will always ignore the encoding argument.

@param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.
@param callback Callback for when this chunk of data is flushed.
@returns false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.

chunk: any,encoding: BufferEncoding,): boolean;
The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:

function write(data, cb) {
  if (!stream.write(data)) {
    stream.once('drain', cb);
  } else {
    process.nextTick(cb);
  }
}

// Wait for cb to be called before doing any other write.
write('hello', () => {
  console.log('Write completed, do more writes now.');
});

A Writable stream in object mode will always ignore the encoding argument.

@param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.
@param encoding The encoding, if chunk is a string.
@param callback Callback for when this chunk of data is flushed.
@returns false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.
- ): Disposable;
Listens once to the abort event on the provided signal.

Listening to the abort event on abort signals is unsafe and may lead to resource leaks since another third party with the signal can call e.stopImmediatePropagation(). Unfortunately Node.js cannot change this since it would violate the web standard. Additionally, the original API makes it easy to forget to remove listeners.

This API allows safely using AbortSignals in Node.js APIs by solving these two issues by listening to the event such that stopImmediatePropagation does not prevent the listener from running.

Returns a disposable so that it may be unsubscribed from more easily.

import { addAbortListener } from 'node:events';

function example(signal) {
  let disposable;
  try {
    signal.addEventListener('abort', (e) => e.stopImmediatePropagation());
    disposable = addAbortListener(signal, (e) => {
      // Do something when signal is aborted.
    });
  } finally {
    disposable?.[Symbol.dispose]();
  }
}

@returns Disposable that removes the abort listener.
- src: string | Object | Stream | ArrayBuffer | Blob | Iterable<any, any, any> | AsyncIterable<any, any, any> | AsyncGeneratorFunction | Promise<any>
A utility method for creating duplex streams (see the sketch after this list).

- Stream converts a writable stream into a writable Duplex and a readable stream into a Duplex.
- Blob converts into a readable Duplex.
- string converts into a readable Duplex.
- ArrayBuffer converts into a readable Duplex.
- AsyncIterable converts into a readable Duplex. Cannot yield null.
- AsyncGeneratorFunction converts into a readable/writable transform Duplex. Must take a source AsyncIterable as first parameter. Cannot yield null.
- AsyncFunction converts into a writable Duplex. Must return either null or undefined.
- Object ({ writable, readable }) converts readable and writable into Stream and then combines them into Duplex where the Duplex will write to the writable and read from the readable.
- Promise converts into a readable Duplex. Value null is ignored.
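As a sketch of the AsyncGeneratorFunction case, a hypothetical uppercasing transform; the generator receives the written side as its source AsyncIterable:

import { Duplex } from 'node:stream';

const upper = Duplex.from(async function* (source) {
  for await (const chunk of source) {
    yield String(chunk).toUpperCase();
  }
});

upper.on('data', (chunk) => console.log(chunk.toString())); // HELLO
upper.write('hello');
upper.end();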
- options?: Pick<DuplexOptions<Duplex>, 'signal' | 'allowHalfOpen' | 'decodeStrings' | 'encoding' | 'highWaterMark' | 'objectMode'>
A utility method for creating a Duplex from a web ReadableStream and WritableStream.
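A minimal sketch, assuming Duplex.fromWeb and a trivial, hypothetical pair of web streams:

import { Duplex } from 'node:stream';
import { ReadableStream, WritableStream } from 'node:stream/web';

const readable = new ReadableStream({
  start(controller) {
    controller.enqueue('hello');
    controller.close();
  },
});
const writable = new WritableStream({
  write(chunk) {
    console.log('written:', chunk);
  },
});

const duplex = Duplex.fromWeb({ readable, writable }, { objectMode: true });
duplex.on('data', (chunk) => console.log('read:', chunk)); // read: hello
duplex.write('world'); // written: world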
- name: string | symbol): Function[];
Returns a copy of the array of listeners for the event named eventName.

For EventEmitters this behaves exactly the same as calling .listeners on the emitter.

For EventTargets this is the only way to get the event listeners for the event target. This is useful for debugging and diagnostic purposes.

import { getEventListeners, EventEmitter } from 'node:events';

{
  const ee = new EventEmitter();
  const listener = () => console.log('Events are fun');
  ee.on('foo', listener);
  console.log(getEventListeners(ee, 'foo')); // [ [Function: listener] ]
}
{
  const et = new EventTarget();
  const listener = () => console.log('Events are fun');
  et.addEventListener('foo', listener);
  console.log(getEventListeners(et, 'foo')); // [ [Function: listener] ]
}
- ): number;
Returns the currently set max amount of listeners.

For EventEmitters this behaves exactly the same as calling .getMaxListeners on the emitter.

For EventTargets this is the only way to get the max event listeners for the event target. If the number of event handlers on a single EventTarget exceeds the max set, the EventTarget will print a warning.

import { getMaxListeners, setMaxListeners, EventEmitter } from 'node:events';

{
  const ee = new EventEmitter();
  console.log(getMaxListeners(ee)); // 10
  setMaxListeners(11, ee);
  console.log(getMaxListeners(ee)); // 11
}
{
  const et = new EventTarget();
  console.log(getMaxListeners(et)); // 10
  setMaxListeners(11, et);
  console.log(getMaxListeners(et)); // 11
}
- emitter: EventEmitter,eventName: string | symbol,options?: StaticEventEmitterIteratorOptions): AsyncIterator<any[]>;
import { on, EventEmitter } from 'node:events';
import process from 'node:process';

const ee = new EventEmitter();

// Emit later on
process.nextTick(() => {
  ee.emit('foo', 'bar');
  ee.emit('foo', 42);
});

for await (const event of on(ee, 'foo')) {
  // The execution of this inner block is synchronous and it
  // processes one event at a time (even with await). Do not use
  // if concurrent execution is required.
  console.log(event); // prints ['bar'] [42]
}
// Unreachable here
Returns an
AsyncIterator
that iterateseventName
events. It will throw if theEventEmitter
emits'error'
. It removes all listeners when exiting the loop. Thevalue
returned by each iteration is an array composed of the emitted event arguments.An
AbortSignal
can be used to cancel waiting on events:

import { on, EventEmitter } from 'node:events';
import process from 'node:process';

const ac = new AbortController();

(async () => {
  const ee = new EventEmitter();

  // Emit later on
  process.nextTick(() => {
    ee.emit('foo', 'bar');
    ee.emit('foo', 42);
  });

  for await (const event of on(ee, 'foo', { signal: ac.signal })) {
    // The execution of this inner block is synchronous and it
    // processes one event at a time (even with await). Do not use
    // if concurrent execution is required.
    console.log(event); // prints ['bar'] [42]
  }
  // Unreachable here
})();

process.nextTick(() => ac.abort());
Use the
close
option to specify an array of event names that will end the iteration:

import { on, EventEmitter } from 'node:events';
import process from 'node:process';

const ee = new EventEmitter();

// Emit later on
process.nextTick(() => {
  ee.emit('foo', 'bar');
  ee.emit('foo', 42);
  ee.emit('close');
});

for await (const event of on(ee, 'foo', { close: ['close'] })) {
  console.log(event); // prints ['bar'] [42]
}
// the loop will exit after 'close' is emitted
console.log('done'); // prints 'done'
@returnsAn
AsyncIterator
that iterateseventName
events emitted by theemitter
eventName: string,options?: StaticEventEmitterIteratorOptions): AsyncIterator<any[]>;

import { on, EventEmitter } from 'node:events';
import process from 'node:process';

const ee = new EventEmitter();

// Emit later on
process.nextTick(() => {
  ee.emit('foo', 'bar');
  ee.emit('foo', 42);
});

for await (const event of on(ee, 'foo')) {
  // The execution of this inner block is synchronous and it
  // processes one event at a time (even with await). Do not use
  // if concurrent execution is required.
  console.log(event); // prints ['bar'] [42]
}
// Unreachable here
Returns an
AsyncIterator
that iterateseventName
events. It will throw if theEventEmitter
emits'error'
. It removes all listeners when exiting the loop. Thevalue
returned by each iteration is an array composed of the emitted event arguments.An
AbortSignal
can be used to cancel waiting on events:

import { on, EventEmitter } from 'node:events';
import process from 'node:process';

const ac = new AbortController();

(async () => {
  const ee = new EventEmitter();

  // Emit later on
  process.nextTick(() => {
    ee.emit('foo', 'bar');
    ee.emit('foo', 42);
  });

  for await (const event of on(ee, 'foo', { signal: ac.signal })) {
    // The execution of this inner block is synchronous and it
    // processes one event at a time (even with await). Do not use
    // if concurrent execution is required.
    console.log(event); // prints ['bar'] [42]
  }
  // Unreachable here
})();

process.nextTick(() => ac.abort());
Use the
close
option to specify an array of event names that will end the iteration:

import { on, EventEmitter } from 'node:events';
import process from 'node:process';

const ee = new EventEmitter();

// Emit later on
process.nextTick(() => {
  ee.emit('foo', 'bar');
  ee.emit('foo', 42);
  ee.emit('close');
});

for await (const event of on(ee, 'foo', { close: ['close'] })) {
  console.log(event); // prints ['bar'] [42]
}
// the loop will exit after 'close' is emitted
console.log('done'); // prints 'done'
@returnsAn
AsyncIterator
that iterateseventName
events emitted by theemitter
- emitter: EventEmitter,eventName: string | symbol,options?: StaticEventEmitterOptions): Promise<any[]>;
Creates a
Promise
that is fulfilled when theEventEmitter
emits the given event or that is rejected if theEventEmitter
emits'error'
while waiting. ThePromise
will resolve with an array of all the arguments emitted to the given event.This method is intentionally generic and works with the web platform EventTarget interface, which has no special
'error'
event semantics and does not listen to the'error'
event.import { once, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); process.nextTick(() => { ee.emit('myevent', 42); }); const [value] = await once(ee, 'myevent'); console.log(value); const err = new Error('kaboom'); process.nextTick(() => { ee.emit('error', err); }); try { await once(ee, 'myevent'); } catch (err) { console.error('error happened', err); }
The special handling of the
'error'
event is only used whenevents.once()
is used to wait for another event. Ifevents.once()
is used to wait for the 'error'
event itself, then it is treated as any other kind of event without special handling:import { EventEmitter, once } from 'node:events'; const ee = new EventEmitter(); once(ee, 'error') .then(([err]) => console.log('ok', err.message)) .catch((err) => console.error('error', err.message)); ee.emit('error', new Error('boom')); // Prints: ok boom
An
AbortSignal
can be used to cancel waiting for the event:

import { EventEmitter, once } from 'node:events';

const ee = new EventEmitter();
const ac = new AbortController();

async function foo(emitter, event, signal) {
  try {
    await once(emitter, event, { signal });
    console.log('event emitted!');
  } catch (error) {
    if (error.name === 'AbortError') {
      console.error('Waiting for the event was canceled!');
    } else {
      console.error('There was an error', error.message);
    }
  }
}

foo(ee, 'foo', ac.signal);
ac.abort(); // Abort waiting for the event
ee.emit('foo'); // Prints: Waiting for the event was canceled!
eventName: string,options?: StaticEventEmitterOptions): Promise<any[]>;Creates a
Promise
that is fulfilled when theEventEmitter
emits the given event or that is rejected if theEventEmitter
emits'error'
while waiting. ThePromise
will resolve with an array of all the arguments emitted to the given event.This method is intentionally generic and works with the web platform EventTarget interface, which has no special
'error'
event semantics and does not listen to the'error'
event.import { once, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); process.nextTick(() => { ee.emit('myevent', 42); }); const [value] = await once(ee, 'myevent'); console.log(value); const err = new Error('kaboom'); process.nextTick(() => { ee.emit('error', err); }); try { await once(ee, 'myevent'); } catch (err) { console.error('error happened', err); }
The special handling of the
'error'
event is only used whenevents.once()
is used to wait for another event. Ifevents.once()
is used to wait for the 'error'
event itself, then it is treated as any other kind of event without special handling:import { EventEmitter, once } from 'node:events'; const ee = new EventEmitter(); once(ee, 'error') .then(([err]) => console.log('ok', err.message)) .catch((err) => console.error('error', err.message)); ee.emit('error', new Error('boom')); // Prints: ok boom
An
AbortSignal
can be used to cancel waiting for the event:

import { EventEmitter, once } from 'node:events';

const ee = new EventEmitter();
const ac = new AbortController();

async function foo(emitter, event, signal) {
  try {
    await once(emitter, event, { signal });
    console.log('event emitted!');
  } catch (error) {
    if (error.name === 'AbortError') {
      console.error('Waiting for the event was canceled!');
    } else {
      console.error('There was an error', error.message);
    }
  }
}

foo(ee, 'foo', ac.signal);
ac.abort(); // Abort waiting for the event
ee.emit('foo'); // Prints: Waiting for the event was canceled!
- n?: number,): void;
import { setMaxListeners, EventEmitter } from 'node:events'; const target = new EventTarget(); const emitter = new EventEmitter(); setMaxListeners(5, target, emitter);
@param n A non-negative number. The maximum number of listeners per EventTarget event.
@param eventTargets Zero or more {EventTarget} or {EventEmitter} instances. If none are specified, n is set as the default max for all newly created {EventTarget} and {EventEmitter} objects.

A utility method for creating a web ReadableStream and WritableStream from a Duplex.
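A minimal sketch, assuming Duplex.toWeb and a simple PassThrough as the duplex:

import { Duplex, PassThrough } from 'node:stream';

const pass = new PassThrough();
const { readable, writable } = Duplex.toWeb(pass);

const writer = writable.getWriter();
await writer.write('hello');

const reader = readable.getReader();
const { value } = await reader.read();
console.log(value); // the bytes of 'hello' (a Uint8Array unless object mode is used)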
class KeyObject
Node.js uses a KeyObject class to represent a symmetric or asymmetric key, and each kind of key exposes different functions. The createSecretKey, createPublicKey and createPrivateKey methods are used to create KeyObject instances. KeyObject objects are not to be created directly using the new keyword.

Most applications should consider using the new KeyObject API instead of passing keys as strings or Buffers due to improved security features.

KeyObject instances can be passed to other threads via postMessage(). The receiver obtains a cloned KeyObject, and the KeyObject does not need to be listed in the transferList argument.
- asymmetricKeyDetails?: AsymmetricKeyDetails
This property exists only on asymmetric keys. Depending on the type of the key, this object contains information about the key. None of the information obtained through this property can be used to uniquely identify a key or to compromise the security of the key.

For RSA-PSS keys, if the key material contains a RSASSA-PSS-params sequence, the hashAlgorithm, mgf1HashAlgorithm, and saltLength properties will be set.

Other key details might be exposed via this API using additional attributes.
- asymmetricKeyType?: KeyType
For asymmetric keys, this property represents the type of the key. Supported key types are:

- 'rsa' (OID 1.2.840.113549.1.1.1)
- 'rsa-pss' (OID 1.2.840.113549.1.1.10)
- 'dsa' (OID 1.2.840.10040.4.1)
- 'ec' (OID 1.2.840.10045.2.1)
- 'x25519' (OID 1.3.101.110)
- 'x448' (OID 1.3.101.111)
- 'ed25519' (OID 1.3.101.112)
- 'ed448' (OID 1.3.101.113)
- 'dh' (OID 1.2.840.113549.1.3.1)

This property is undefined for unrecognized KeyObject types and symmetric keys.
- symmetricKeySize?: number
For secret keys, this property represents the size of the key in bytes. This property is undefined for asymmetric keys.
- type: KeyObjectType
Depending on the type of this KeyObject, this property is either 'secret' for secret (symmetric) keys, 'public' for public (asymmetric) keys or 'private' for private (asymmetric) keys.
- ): boolean;
Returns true or false depending on whether the keys have exactly the same type, value, and parameters. This method is not constant time.

@param otherKeyObject A KeyObject with which to compare keyObject.
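A minimal sketch with two secret keys created from the same material (the key strings are hypothetical):

import { createSecretKey } from 'node:crypto';

const a = createSecretKey(Buffer.from('super secret'));
const b = createSecretKey(Buffer.from('super secret'));
const c = createSecretKey(Buffer.from('other secret'));

console.log(a.equals(b)); // true
console.log(a.equals(c)); // false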
For symmetric keys, the following encoding options can be used:
For public keys, the following encoding options can be used:
For private keys, the following encoding options can be used:
The result type depends on the selected encoding format: when PEM the result is a string, when DER it will be a buffer containing the data encoded as DER, and when JWK it will be an object.

When the JWK encoding format is selected, all other encoding options are ignored.

PKCS#1, SEC1, and PKCS#8 type keys can be encrypted by using a combination of the cipher and format options. The PKCS#8 type can be used with any format to encrypt any key algorithm (RSA, EC, or DH) by specifying a cipher. PKCS#1 and SEC1 can only be encrypted by specifying a cipher when the PEM format is used. For maximum compatibility, use PKCS#8 for encrypted private keys. Since PKCS#8 defines its own encryption mechanism, PEM-level encryption is not supported when encrypting a PKCS#8 key. See RFC 5208 for PKCS#8 encryption and RFC 1421 for PKCS#1 and SEC1 encryption.

For symmetric keys, the following encoding options can be used:
For public keys, the following encoding options can be used:
For private keys, the following encoding options can be used:
The result type depends on the selected encoding format: when PEM the result is a string, when DER it will be a buffer containing the data encoded as DER, and when JWK it will be an object.

When the JWK encoding format is selected, all other encoding options are ignored.

PKCS#1, SEC1, and PKCS#8 type keys can be encrypted by using a combination of the cipher and format options. The PKCS#8 type can be used with any format to encrypt any key algorithm (RSA, EC, or DH) by specifying a cipher. PKCS#1 and SEC1 can only be encrypted by specifying a cipher when the PEM format is used. For maximum compatibility, use PKCS#8 for encrypted private keys. Since PKCS#8 defines its own encryption mechanism, PEM-level encryption is not supported when encrypting a PKCS#8 key. See RFC 5208 for PKCS#8 encryption and RFC 1421 for PKCS#1 and SEC1 encryption.

For symmetric keys, the following encoding options can be used:
For public keys, the following encoding options can be used:
For private keys, the following encoding options can be used:
The result type depends on the selected encoding format: when PEM the result is a string, when DER it will be a buffer containing the data encoded as DER, and when JWK it will be an object.

When the JWK encoding format is selected, all other encoding options are ignored.

PKCS#1, SEC1, and PKCS#8 type keys can be encrypted by using a combination of the cipher and format options. The PKCS#8 type can be used with any format to encrypt any key algorithm (RSA, EC, or DH) by specifying a cipher. PKCS#1 and SEC1 can only be encrypted by specifying a cipher when the PEM format is used. For maximum compatibility, use PKCS#8 for encrypted private keys. Since PKCS#8 defines its own encryption mechanism, PEM-level encryption is not supported when encrypting a PKCS#8 key. See RFC 5208 for PKCS#8 encryption and RFC 1421 for PKCS#1 and SEC1 encryption.
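A minimal sketch of exporting a freshly generated RSA key pair, assuming SPKI/PEM for the public key and encrypted PKCS#8/PEM for the private key:

import { generateKeyPairSync } from 'node:crypto';

const { publicKey, privateKey } = generateKeyPairSync('rsa', {
  modulusLength: 2048,
});

console.log(publicKey.export({ type: 'spki', format: 'pem' }));
console.log(privateKey.export({
  type: 'pkcs8',
  format: 'pem',
  cipher: 'aes-256-cbc',
  passphrase: 'top secret', // hypothetical passphrase
}));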
- extractable: boolean,
Converts a KeyObject instance to a CryptoKey.

Example: Converting a CryptoKey instance to a KeyObject:

const { KeyObject } = await import('node:crypto');
const { subtle } = globalThis.crypto;

const key = await subtle.generateKey({
  name: 'HMAC',
  hash: 'SHA-256',
  length: 256,
}, true, ['sign', 'verify']);

const keyObject = KeyObject.from(key);
console.log(keyObject.symmetricKeySize);
// Prints: 32 (symmetric key size in bytes)
class Sign
The Sign class is a utility for generating signatures. It can be used in one of two ways:

- As a writable stream, where data to be signed is written and the sign.sign() method is used to generate and return the signature, or
- Using the sign.update() and sign.sign() methods to produce the signature.

The createSign method is used to create Sign instances. The argument is the string name of the hash function to use. Sign objects are not to be created directly using the new keyword.

Example: Using Sign and Verify objects as streams:

const {
  generateKeyPairSync,
  createSign,
  createVerify,
} = await import('node:crypto');

const { privateKey, publicKey } = generateKeyPairSync('ec', {
  namedCurve: 'sect239k1',
});

const sign = createSign('SHA256');
sign.write('some data to sign');
sign.end();
const signature = sign.sign(privateKey, 'hex');

const verify = createVerify('SHA256');
verify.write('some data to sign');
verify.end();
console.log(verify.verify(publicKey, signature, 'hex'));
// Prints: true

Example: Using the sign.update() and verify.update() methods:

const {
  generateKeyPairSync,
  createSign,
  createVerify,
} = await import('node:crypto');

const { privateKey, publicKey } = generateKeyPairSync('rsa', {
  modulusLength: 2048,
});

const sign = createSign('SHA256');
sign.update('some data to sign');
sign.end();
const signature = sign.sign(privateKey);

const verify = createVerify('SHA256');
verify.update('some data to sign');
verify.end();
console.log(verify.verify(publicKey, signature));
// Prints: true
- readonly writable: boolean
Is true if it is safe to call writable.write(), which means the stream has not been destroyed, errored, or ended.
- readonly writableCorked: number
Number of times writable.uncork() needs to be called in order to fully uncork the stream.
- readonly writableEnded: boolean
Is true after writable.end() has been called. This property does not indicate whether the data has been flushed; for this, use writable.writableFinished instead.
- readonly writableHighWaterMark: number
Return the value of highWaterMark passed when creating this Writable.
- readonly writableLength: number
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the highWaterMark.
- readonly writableNeedDrain: boolean
Is true if the stream's buffer has been full and stream will emit 'drain'.
- static captureRejections: boolean
Value: boolean

Change the default captureRejections option on all new EventEmitter objects.
- readonly static captureRejectionSymbol: typeof captureRejectionSymbol
Value: Symbol.for('nodejs.rejection')

See how to write a custom rejection handler.
- static defaultMaxListeners: number
By default, a maximum of 10 listeners can be registered for any single event. This limit can be changed for individual EventEmitter instances using the emitter.setMaxListeners(n) method. To change the default for all EventEmitter instances, the events.defaultMaxListeners property can be used. If this value is not a positive number, a RangeError is thrown.

Take caution when setting the events.defaultMaxListeners because the change affects all EventEmitter instances, including those created before the change is made. However, calling emitter.setMaxListeners(n) still has precedence over events.defaultMaxListeners.

This is not a hard limit. The EventEmitter instance will allow more listeners to be added but will output a trace warning to stderr indicating that a "possible EventEmitter memory leak" has been detected. For any single EventEmitter, the emitter.getMaxListeners() and emitter.setMaxListeners() methods can be used to temporarily avoid this warning:

import { EventEmitter } from 'node:events';
const emitter = new EventEmitter();
emitter.setMaxListeners(emitter.getMaxListeners() + 1);
emitter.once('event', () => {
  // do stuff
  emitter.setMaxListeners(Math.max(emitter.getMaxListeners() - 1, 0));
});

The --trace-warnings command-line flag can be used to display the stack trace for such warnings.

The emitted warning can be inspected with process.on('warning') and will have the additional emitter, type, and count properties, referring to the event emitter instance, the event's name and the number of attached listeners, respectively. Its name property is set to 'MaxListenersExceededWarning'.
- readonly static errorMonitor: typeof errorMonitor
This symbol shall be used to install a listener for only monitoring 'error' events. Listeners installed using this symbol are called before the regular 'error' listeners are called.

Installing a listener using this symbol does not change the behavior once an 'error' event is emitted. Therefore, the process will still crash if no regular 'error' listener is installed.
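A minimal sketch: the monitor sees the error first, and the regular 'error' listener still handles it:

import { EventEmitter, errorMonitor } from 'node:events';

const ee = new EventEmitter();
ee.on(errorMonitor, (err) => {
  // Runs before regular 'error' listeners and does not consume the event.
  console.log('monitored:', err.message);
});
ee.on('error', (err) => {
  console.log('handled:', err.message);
});
ee.emit('error', new Error('boom'));
// Prints:
//   monitored: boom
//   handled: boom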
- event: 'close',listener: () => void): this;
Event emitter. The defined events on documents include:
- close
- drain
- error
- finish
- pipe
- unpipe
event: 'drain',listener: () => void): this;
Event emitter. The defined events on documents include:
- close
- drain
- error
- finish
- pipe
- unpipe
event: 'error',): this;
Event emitter. The defined events on documents include:
- close
- drain
- error
- finish
- pipe
- unpipe
event: 'finish',listener: () => void): this;
Event emitter. The defined events on documents include:
- close
- drain
- error
- finish
- pipe
- unpipe
event: 'pipe',): this;
Event emitter. The defined events on documents include:
- close
- drain
- error
- finish
- pipe
- unpipe
event: 'unpipe',): this;
Event emitter. The defined events on documents include:
- close
- drain
- error
- finish
- pipe
- unpipe
event: string | symbol,listener: (...args: any[]) => void): this;
Event emitter. The defined events on documents include:
- close
- drain
- error
- finish
- pipe
- unpipe
- stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
The writable.cork() method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.

The primary intent of writable.cork() is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them all to writable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork() without implementing writable._writev() may have an adverse effect on throughput.

See also: writable.uncork(), writable._writev().
- ): this;
Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the writable stream has ended and subsequent calls to write() or end() will result in an ERR_STREAM_DESTROYED error. This is a destructive and immediate way to destroy a stream. Previous calls to write() may not have drained, and may trigger an ERR_STREAM_DESTROYED error. Use end() instead of destroy if data should flush before close, or wait for the 'drain' event before destroying the stream.

Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.

Implementors should not override this method, but instead implement writable._destroy().

@param error Optional, an error to emit with 'error' event.
- emit(event: 'close'): boolean;
Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.

Returns true if the event had listeners, false otherwise.

import { EventEmitter } from 'node:events';
const myEmitter = new EventEmitter();

// First listener
myEmitter.on('event', function firstListener() {
  console.log('Helloooo! first listener');
});
// Second listener
myEmitter.on('event', function secondListener(arg1, arg2) {
  console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
});
// Third listener
myEmitter.on('event', function thirdListener(...args) {
  const parameters = args.join(', ');
  console.log(`event with parameters ${parameters} in third listener`);
});

console.log(myEmitter.listeners('event'));

myEmitter.emit('event', 1, 2, 3, 4, 5);

// Prints:
// [
//   [Function: firstListener],
//   [Function: secondListener],
//   [Function: thirdListener]
// ]
// Helloooo! first listener
// event with parameters 1, 2 in second listener
// event with parameters 1, 2, 3, 4, 5 in third listener
- end(cb?: () => void): this;
Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

Calling the write method after calling end will raise an error.

// Write 'hello, ' and then end with 'world!'.
import fs from 'node:fs';
const file = fs.createWriteStream('example.txt');
file.write('hello, ');
file.end('world!');
// Writing more now is not allowed!

end(chunk: any,cb?: () => void): this;
Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

Calling the write method after calling end will raise an error.

// Write 'hello, ' and then end with 'world!'.
import fs from 'node:fs';
const file = fs.createWriteStream('example.txt');
file.write('hello, ');
file.end('world!');
// Writing more now is not allowed!

@param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.

end(chunk: any,encoding: BufferEncoding,cb?: () => void): this;
Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

Calling the write method after calling end will raise an error.

// Write 'hello, ' and then end with 'world!'.
import fs from 'node:fs';
const file = fs.createWriteStream('example.txt');
file.write('hello, ');
file.end('world!');
// Writing more now is not allowed!

@param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.
@param encoding The encoding if chunk is a string

Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or Symbols.

import { EventEmitter } from 'node:events';

const myEE = new EventEmitter();
myEE.on('foo', () => {});
myEE.on('bar', () => {});

const sym = Symbol('symbol');
myEE.on(sym, () => {});

console.log(myEE.eventNames());
// Prints: [ 'foo', 'bar', Symbol(symbol) ]
Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to EventEmitter.defaultMaxListeners.
- eventName: string | symbol,listener?: Function): number;
Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.

@param eventName The name of the event being listened for
@param listener The event handler function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named eventName.

server.on('connection', (stream) => {
  console.log('someone connected!');
});
console.log(util.inspect(server.listeners('connection')));
// Prints: [ [Function] ]
- eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for emitter.removeListener().
- on(event: 'close',listener: () => void): this;
Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

server.on('connection', (stream) => {
  console.log('someone connected!');
});

Returns a reference to the EventEmitter, so that calls can be chained.

By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

import { EventEmitter } from 'node:events';
const myEE = new EventEmitter();
myEE.on('foo', () => console.log('a'));
myEE.prependListener('foo', () => console.log('b'));
myEE.emit('foo');
// Prints:
//   b
//   a

@param listener The callback function
- once(event: 'close',listener: () => void): this;
Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.

server.once('connection', (stream) => {
  console.log('Ah, we have our first user!');
});

Returns a reference to the EventEmitter, so that calls can be chained.

By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.

import { EventEmitter } from 'node:events';
const myEE = new EventEmitter();
myEE.once('foo', () => console.log('a'));
myEE.prependOnceListener('foo', () => console.log('b'));
myEE.emit('foo');
// Prints:
//   b
//   a

@param listener The callback function
- event: 'close',listener: () => void): this;
Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.

server.prependListener('connection', (stream) => {
  console.log('someone connected!');
});

Returns a reference to the EventEmitter, so that calls can be chained.

@param listener The callback function
- event: 'close',listener: () => void): this;
Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.

server.prependOnceListener('connection', (stream) => {
  console.log('Ah, we have our first user!');
});

Returns a reference to the EventEmitter, so that calls can be chained.

@param listener The callback function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named eventName, including any wrappers (such as those created by .once()).

import { EventEmitter } from 'node:events';
const emitter = new EventEmitter();
emitter.once('log', () => console.log('log once'));

// Returns a new Array with a function `onceWrapper` which has a property
// `listener` which contains the original listener bound above
const listeners = emitter.rawListeners('log');
const logFnWrapper = listeners[0];

// Logs "log once" to the console and does not unbind the `once` event
logFnWrapper.listener();

// Logs "log once" to the console and removes the listener
logFnWrapper();

emitter.on('log', () => console.log('log persistently'));
// Will return a new Array with a single function bound by `.on()` above
const newListeners = emitter.rawListeners('log');

// Logs "log persistently" twice
newListeners[0]();
emitter.emit('log');
- eventName?: string | symbol): this;
Removes all listeners, or those of the specified eventName.

It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).

Returns a reference to the EventEmitter, so that calls can be chained.
- event: 'close',listener: () => void): this;
Removes the specified listener from the listener array for the event named eventName.

const callback = (stream) => {
  console.log('someone connected!');
};
server.on('connection', callback);
// ...
server.removeListener('connection', callback);

removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance.

Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from emit() in progress. Subsequent events behave as expected.

import { EventEmitter } from 'node:events';
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();

const callbackA = () => {
  console.log('A');
  myEmitter.removeListener('event', callbackB);
};

const callbackB = () => {
  console.log('B');
};

myEmitter.on('event', callbackA);
myEmitter.on('event', callbackB);

// callbackA removes listener callbackB but it will still be called.
// Internal listener array at time of emit [callbackA, callbackB]
myEmitter.emit('event');
// Prints:
//   A
//   B

// callbackB is now removed.
// Internal listener array [callbackA]
myEmitter.emit('event');
// Prints:
//   A

Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.

When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed:

import { EventEmitter } from 'node:events';
const ee = new EventEmitter();

function pong() {
  console.log('pong');
}

ee.on('ping', pong);
ee.once('ping', pong);
ee.removeListener('ping', pong);

ee.emit('ping');
ee.emit('ping');

Returns a reference to the EventEmitter, so that calls can be chained.
- encoding: BufferEncoding): this;
The writable.setDefaultEncoding() method sets the default encoding for a Writable stream.

@param encoding The new default encoding
- n: number): this;
By default EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps finding memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.

Returns a reference to the EventEmitter, so that calls can be chained.
- sign(
, so that calls can be chained. - sign(
Calculates the signature on all the data passed through using either sign.update() or sign.write().

If privateKey is not a KeyObject, this function behaves as if privateKey had been passed to createPrivateKey. If it is an object, the following additional properties can be passed:

If outputEncoding is provided a string is returned; otherwise a Buffer is returned.

The Sign object cannot be used again after the sign.sign() method has been called. Multiple calls to sign.sign() will result in an error being thrown.

sign(): string;
Calculates the signature on all the data passed through using either sign.update() or sign.write().

If privateKey is not a KeyObject, this function behaves as if privateKey had been passed to createPrivateKey. If it is an object, the following additional properties can be passed:

If outputEncoding is provided a string is returned; otherwise a Buffer is returned.

The Sign object cannot be used again after the sign.sign() method has been called. Multiple calls to sign.sign() will result in an error being thrown.
The writable.uncork() method flushes all data buffered since cork was called.

When using writable.cork() and writable.uncork() to manage the buffering of writes to a stream, defer calls to writable.uncork() using process.nextTick(). Doing so allows batching of all writable.write() calls that occur within a given Node.js event loop phase.

stream.cork();
stream.write('some ');
stream.write('data ');
process.nextTick(() => stream.uncork());

If the writable.cork() method is called multiple times on a stream, the same number of calls to writable.uncork() must be called to flush the buffered data.

stream.cork();
stream.write('some ');
stream.cork();
stream.write('data ');
process.nextTick(() => {
  stream.uncork();
  // The data will not be flushed until uncork() is called a second time.
  stream.uncork();
});

See also: writable.cork().
- ): this;
Updates the Sign content with the given data, the encoding of which is given in inputEncoding. If inputEncoding is not provided, and the data is a string, an encoding of 'utf8' is enforced. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.

This can be called many times with new data as it is streamed.

data: string,): this;
Updates the Sign content with the given data, the encoding of which is given in inputEncoding. If inputEncoding is not provided, and the data is a string, an encoding of 'utf8' is enforced. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.

This can be called many times with new data as it is streamed.

@param inputEncoding The encoding of the data string.
- chunk: any,): boolean;
The
writable.write()
method writes some data to the stream, and calls the suppliedcallback
once the data has been fully handled. If an error occurs, thecallback
will be called with the error as its first argument. Thecallback
is called asynchronously and before'error'
is emitted.The return value is
true
if the internal buffer is less than thehighWaterMark
configured when the stream was created after admittingchunk
. Iffalse
is returned, further attempts to write data to the stream should stop until the'drain'
event is emitted.While a stream is not draining, calls to
write()
will bufferchunk
, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'
event will be emitted. Oncewrite()
returns false, do not write more chunks until the'drain'
event is emitted. While callingwrite()
on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform
, because theTransform
streams are paused by default until they are piped or a'data'
or'readable'
event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readable
and use pipe. However, if callingwrite()
is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'
event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });
A
Writable
stream in object mode will always ignore theencoding
argument.
@param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.
@param callback Callback for when this chunk of data is flushed.
@returns false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.
chunk: any,encoding: BufferEncoding,): boolean;
The
writable.write()
method writes some data to the stream, and calls the suppliedcallback
once the data has been fully handled. If an error occurs, thecallback
will be called with the error as its first argument. Thecallback
is called asynchronously and before'error'
is emitted.The return value is
true
if the internal buffer is less than thehighWaterMark
configured when the stream was created after admittingchunk
. Iffalse
is returned, further attempts to write data to the stream should stop until the'drain'
event is emitted.While a stream is not draining, calls to
write()
will bufferchunk
, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'
event will be emitted. Oncewrite()
returns false, do not write more chunks until the'drain'
event is emitted. While callingwrite()
on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform
, because theTransform
streams are paused by default until they are piped or a'data'
or'readable'
event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readable
and use pipe. However, if callingwrite()
is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'
event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });
A
Writable
stream in object mode will always ignore theencoding
argument.
@param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.
@param encoding The encoding, if chunk is a string.
@param callback Callback for when this chunk of data is flushed.
@returns false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.
- ): Disposable;
Listens once to the
abort
event on the providedsignal
.Listening to the
abort
event on abort signals is unsafe and may lead to resource leaks since another third party with the signal can calle.stopImmediatePropagation()
. Unfortunately Node.js cannot change this since it would violate the web standard. Additionally, the original API makes it easy to forget to remove listeners.This API allows safely using
AbortSignal
s in Node.js APIs by solving these two issues by listening to the event such thatstopImmediatePropagation
does not prevent the listener from running.Returns a disposable so that it may be unsubscribed from more easily.
import { addAbortListener } from 'node:events'; function example(signal) { let disposable; try { signal.addEventListener('abort', (e) => e.stopImmediatePropagation()); disposable = addAbortListener(signal, (e) => { // Do something when signal is aborted. }); } finally { disposable?.[Symbol.dispose](); } }
@returnsDisposable that removes the
abort
listener. - options?: Pick<WritableOptions<Writable>, 'signal' | 'decodeStrings' | 'highWaterMark' | 'objectMode'>
A utility method for creating a Writable from a web WritableStream.
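A minimal sketch (assuming a runtime where the WritableStream web global is available; this API is still marked experimental in some Node.js releases):
import { Writable } from 'node:stream';

// A web WritableStream that logs each chunk it receives.
const webWritable = new WritableStream({
  write(chunk) {
    console.log('received:', chunk);
  },
});

const nodeWritable = Writable.fromWeb(webWritable);
nodeWritable.write('hello');
nodeWritable.end();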
- name: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName
.For
EventEmitter
s this behaves exactly the same as calling.listeners
on the emitter.For
EventTarget
s this is the only way to get the event listeners for the event target. This is useful for debugging and diagnostic purposes.import { getEventListeners, EventEmitter } from 'node:events'; { const ee = new EventEmitter(); const listener = () => console.log('Events are fun'); ee.on('foo', listener); console.log(getEventListeners(ee, 'foo')); // [ [Function: listener] ] } { const et = new EventTarget(); const listener = () => console.log('Events are fun'); et.addEventListener('foo', listener); console.log(getEventListeners(et, 'foo')); // [ [Function: listener] ] }
- ): number;
Returns the currently set max amount of listeners.
For
EventEmitter
s this behaves exactly the same as calling.getMaxListeners
on the emitter.For
EventTarget
s this is the only way to get the max event listeners for the event target. If the number of event handlers on a single EventTarget exceeds the max set, the EventTarget will print a warning.import { getMaxListeners, setMaxListeners, EventEmitter } from 'node:events'; { const ee = new EventEmitter(); console.log(getMaxListeners(ee)); // 10 setMaxListeners(11, ee); console.log(getMaxListeners(ee)); // 11 } { const et = new EventTarget(); console.log(getMaxListeners(et)); // 10 setMaxListeners(11, et); console.log(getMaxListeners(et)); // 11 }
- emitter: EventEmitter,eventName: string | symbol,options?: StaticEventEmitterIteratorOptions): AsyncIterator<any[]>;
import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); }); for await (const event of on(ee, 'foo')) { // The execution of this inner block is synchronous and it // processes one event at a time (even with await). Do not use // if concurrent execution is required. console.log(event); // prints ['bar'] [42] } // Unreachable here
Returns an
AsyncIterator
that iterateseventName
events. It will throw if theEventEmitter
emits'error'
. It removes all listeners when exiting the loop. Thevalue
returned by each iteration is an array composed of the emitted event arguments.An
AbortSignal
can be used to cancel waiting on events:import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ac = new AbortController(); (async () => { const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); }); for await (const event of on(ee, 'foo', { signal: ac.signal })) { // The execution of this inner block is synchronous and it // processes one event at a time (even with await). Do not use // if concurrent execution is required. console.log(event); // prints ['bar'] [42] } // Unreachable here })(); process.nextTick(() => ac.abort());
Use the
close
option to specify an array of event names that will end the iteration:import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); ee.emit('close'); }); for await (const event of on(ee, 'foo', { close: ['close'] })) { console.log(event); // prints ['bar'] [42] } // the loop will exit after 'close' is emitted console.log('done'); // prints 'done'
@returnsAn
AsyncIterator
that iterateseventName
events emitted by theemitter
eventName: string,options?: StaticEventEmitterIteratorOptions): AsyncIterator<any[]>;import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); }); for await (const event of on(ee, 'foo')) { // The execution of this inner block is synchronous and it // processes one event at a time (even with await). Do not use // if concurrent execution is required. console.log(event); // prints ['bar'] [42] } // Unreachable here
Returns an
AsyncIterator
that iterateseventName
events. It will throw if theEventEmitter
emits'error'
. It removes all listeners when exiting the loop. Thevalue
returned by each iteration is an array composed of the emitted event arguments.An
AbortSignal
can be used to cancel waiting on events:import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ac = new AbortController(); (async () => { const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); }); for await (const event of on(ee, 'foo', { signal: ac.signal })) { // The execution of this inner block is synchronous and it // processes one event at a time (even with await). Do not use // if concurrent execution is required. console.log(event); // prints ['bar'] [42] } // Unreachable here })(); process.nextTick(() => ac.abort());
Use the
close
option to specify an array of event names that will end the iteration:import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); ee.emit('close'); }); for await (const event of on(ee, 'foo', { close: ['close'] })) { console.log(event); // prints ['bar'] [42] } // the loop will exit after 'close' is emitted console.log('done'); // prints 'done'
@returnsAn
AsyncIterator
that iterateseventName
events emitted by theemitter
- emitter: EventEmitter,eventName: string | symbol,options?: StaticEventEmitterOptions): Promise<any[]>;
Creates a
Promise
that is fulfilled when theEventEmitter
emits the given event or that is rejected if theEventEmitter
emits'error'
while waiting. ThePromise
will resolve with an array of all the arguments emitted to the given event.This method is intentionally generic and works with the web platform EventTarget interface, which has no special
'error'
event semantics and does not listen to the'error'
event.import { once, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); process.nextTick(() => { ee.emit('myevent', 42); }); const [value] = await once(ee, 'myevent'); console.log(value); const err = new Error('kaboom'); process.nextTick(() => { ee.emit('error', err); }); try { await once(ee, 'myevent'); } catch (err) { console.error('error happened', err); }
The special handling of the
'error'
event is only used whenevents.once()
is used to wait for another event. Ifevents.once()
is used to wait for the 'error'
event itself, then it is treated as any other kind of event without special handling:import { EventEmitter, once } from 'node:events'; const ee = new EventEmitter(); once(ee, 'error') .then(([err]) => console.log('ok', err.message)) .catch((err) => console.error('error', err.message)); ee.emit('error', new Error('boom')); // Prints: ok boom
An
AbortSignal
can be used to cancel waiting for the event:import { EventEmitter, once } from 'node:events'; const ee = new EventEmitter(); const ac = new AbortController(); async function foo(emitter, event, signal) { try { await once(emitter, event, { signal }); console.log('event emitted!'); } catch (error) { if (error.name === 'AbortError') { console.error('Waiting for the event was canceled!'); } else { console.error('There was an error', error.message); } } } foo(ee, 'foo', ac.signal); ac.abort(); // Abort waiting for the event ee.emit('foo'); // Prints: Waiting for the event was canceled!
eventName: string,options?: StaticEventEmitterOptions): Promise<any[]>;Creates a
Promise
that is fulfilled when theEventEmitter
emits the given event or that is rejected if theEventEmitter
emits'error'
while waiting. ThePromise
will resolve with an array of all the arguments emitted to the given event.This method is intentionally generic and works with the web platform EventTarget interface, which has no special
'error'
event semantics and does not listen to the'error'
event.import { once, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); process.nextTick(() => { ee.emit('myevent', 42); }); const [value] = await once(ee, 'myevent'); console.log(value); const err = new Error('kaboom'); process.nextTick(() => { ee.emit('error', err); }); try { await once(ee, 'myevent'); } catch (err) { console.error('error happened', err); }
The special handling of the
'error'
event is only used whenevents.once()
is used to wait for another event. Ifevents.once()
is used to wait for the 'error'
event itself, then it is treated as any other kind of event without special handling:import { EventEmitter, once } from 'node:events'; const ee = new EventEmitter(); once(ee, 'error') .then(([err]) => console.log('ok', err.message)) .catch((err) => console.error('error', err.message)); ee.emit('error', new Error('boom')); // Prints: ok boom
An
AbortSignal
can be used to cancel waiting for the event:import { EventEmitter, once } from 'node:events'; const ee = new EventEmitter(); const ac = new AbortController(); async function foo(emitter, event, signal) { try { await once(emitter, event, { signal }); console.log('event emitted!'); } catch (error) { if (error.name === 'AbortError') { console.error('Waiting for the event was canceled!'); } else { console.error('There was an error', error.message); } } } foo(ee, 'foo', ac.signal); ac.abort(); // Abort waiting for the event ee.emit('foo'); // Prints: Waiting for the event was canceled!
- n?: number,): void;
import { setMaxListeners, EventEmitter } from 'node:events'; const target = new EventTarget(); const emitter = new EventEmitter(); setMaxListeners(5, target, emitter);
@param nA non-negative number. The maximum number of listeners per
EventTarget
event.@param eventTargetsZero or more {EventTarget} or {EventEmitter} instances. If none are specified,
n
is set as the default max for all newly created {EventTarget} and {EventEmitter} objects.
A utility method for creating a web WritableStream from a Writable.
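A minimal sketch (the counterpart of fromWeb; also experimental in some Node.js releases):
import { Writable } from 'node:stream';

const nodeWritable = new Writable({
  write(chunk, encoding, callback) {
    console.log('received:', chunk.toString());
    callback();
  },
});

const webWritable = Writable.toWeb(nodeWritable);
const writer = webWritable.getWriter();
await writer.write('hello');
await writer.close();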
class Verify
The
Verify
class is a utility for verifying signatures. It can be used in one of two ways:
- As a writable stream where written data is used to validate against the supplied signature, or
- Using the verify.update() and verify.verify() methods to verify the signature.
The createVerify method is used to create Verify instances. Verify objects are not to be created directly using the new keyword.
See Sign for examples.
- readonly writable: boolean
Is
true
if it is safe to callwritable.write()
, which means the stream has not been destroyed, errored, or ended. - readonly writableCorked: number
Number of times
writable.uncork()
needs to be called in order to fully uncork the stream. - readonly writableEnded: boolean
Is
true
afterwritable.end()
has been called. This property does not indicate whether the data has been flushed, for this usewritable.writableFinished
instead. - readonly writableHighWaterMark: number
Return the value of
highWaterMark
passed when creating thisWritable
. - readonly writableLength: number
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the
highWaterMark
. - readonly writableNeedDrain: boolean
Is
true
if the stream's buffer has been full and stream will emit'drain'
. - static captureRejections: boolean
Value: boolean
Change the default
captureRejections
option on all newEventEmitter
objects. - readonly static captureRejectionSymbol: typeof captureRejectionSymbol
Value:
Symbol.for('nodejs.rejection')
See how to write a custom
rejection handler
. - static defaultMaxListeners: number
By default, a maximum of
10
listeners can be registered for any single event. This limit can be changed for individualEventEmitter
instances using theemitter.setMaxListeners(n)
method. To change the default for allEventEmitter
instances, theevents.defaultMaxListeners
property can be used. If this value is not a positive number, aRangeError
is thrown.Take caution when setting the
events.defaultMaxListeners
because the change affects allEventEmitter
instances, including those created before the change is made. However, callingemitter.setMaxListeners(n)
still has precedence overevents.defaultMaxListeners
.This is not a hard limit. The
EventEmitter
instance will allow more listeners to be added but will output a trace warning to stderr indicating that a "possible EventEmitter memory leak" has been detected. For any singleEventEmitter
, theemitter.getMaxListeners()
andemitter.setMaxListeners()
methods can be used to temporarily avoid this warning:import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.setMaxListeners(emitter.getMaxListeners() + 1); emitter.once('event', () => { // do stuff emitter.setMaxListeners(Math.max(emitter.getMaxListeners() - 1, 0)); });
The
--trace-warnings
command-line flag can be used to display the stack trace for such warnings.The emitted warning can be inspected with
process.on('warning')
and will have the additionalemitter
,type
, andcount
properties, referring to the event emitter instance, the event's name and the number of attached listeners, respectively. Itsname
property is set to'MaxListenersExceededWarning'
. - readonly static errorMonitor: typeof errorMonitor
This symbol shall be used to install a listener for only monitoring
'error'
events. Listeners installed using this symbol are called before the regular'error'
listeners are called.Installing a listener using this symbol does not change the behavior once an
'error'
event is emitted. Therefore, the process will still crash if no regular'error'
listener is installed. - event: 'close',listener: () => void): this;
Event emitter. The defined events on documents include:
- close
- drain
- error
- finish
- pipe
- unpipe
event: 'drain',listener: () => void): this;
Event emitter. The defined events on documents include:
- close
- drain
- error
- finish
- pipe
- unpipe
event: 'error',): this;
Event emitter. The defined events on documents include:
- close
- drain
- error
- finish
- pipe
- unpipe
event: 'finish',listener: () => void): this;
Event emitter. The defined events on documents include:
- close
- drain
- error
- finish
- pipe
- unpipe
event: 'pipe',): this;
Event emitter. The defined events on documents include:
- close
- drain
- error
- finish
- pipe
- unpipe
event: 'unpipe',): this;
Event emitter. The defined events on documents include:
- close
- drain
- error
- finish
- pipe
- unpipe
event: string | symbol,listener: (...args: any[]) => void): this;
Event emitter. The defined events on documents include:
- close
- drain
- error
- finish
- pipe
- unpipe
- stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
The
writable.cork()
method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.The primary intent of
writable.cork()
is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination,writable.cork()
buffers all the chunks untilwritable.uncork()
is called, which will pass them all towritable._writev()
, if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use ofwritable.cork()
without implementingwritable._writev()
may have an adverse effect on throughput.See also:
writable.uncork()
,writable._writev()
.- ): this;
Destroy the stream. Optionally emit an
'error'
event, and emit a'close'
event (unlessemitClose
is set tofalse
). After this call, the writable stream has ended and subsequent calls towrite()
orend()
will result in anERR_STREAM_DESTROYED
error. This is a destructive and immediate way to destroy a stream. Previous calls towrite()
may not have drained, and may trigger anERR_STREAM_DESTROYED
error. Useend()
instead of destroy if data should flush before close, or wait for the'drain'
event before destroying the stream.Once
destroy()
has been called any further calls will be a no-op and no further errors except from_destroy()
may be emitted as'error'
.Implementors should not override this method, but instead implement
writable._destroy()
.@param errorOptional, an error to emit with
'error'
event. - emit(event: 'close'): boolean;
Synchronously calls each of the listeners registered for the event named
eventName
, in the order they were registered, passing the supplied arguments to each.Returns
true
if the event had listeners,false
otherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener
- end(cb?: () => void): this;
Calling the
writable.end()
method signals that no more data will be written to theWritable
. The optionalchunk
andencoding
arguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!
end(chunk: any,cb?: () => void): this;Calling the
writable.end()
method signals that no more data will be written to theWritable
. The optionalchunk
andencoding
arguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!
@param chunkOptional data to write. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunk
may be any JavaScript value other thannull
.end(chunk: any,encoding: BufferEncoding,cb?: () => void): this;Calling the
writable.end()
method signals that no more data will be written to theWritable
. The optionalchunk
andencoding
arguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!
@param chunkOptional data to write. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunk
may be any JavaScript value other thannull
@param encoding The encoding if chunk is a string.
Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbol
s.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]
Returns the current max listener value for the
EventEmitter
which is either set byemitter.setMaxListeners(n)
or defaults to EventEmitter.defaultMaxListeners.- eventName: string | symbol,listener?: Function): number;
Returns the number of listeners listening for the event named
eventName
. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.
@param eventName The name of the event being listened for
@param listener The event handler function
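A minimal sketch (the listener argument is only honored on newer Node.js releases):
import { EventEmitter } from 'node:events';

const ee = new EventEmitter();
const ping = () => {};
ee.on('ping', ping);
ee.on('ping', ping);
ee.on('ping', () => {});
console.log(ee.listenerCount('ping'));       // 3
console.log(ee.listenerCount('ping', ping)); // 2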
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName
.server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ]
- eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for
emitter.removeListener()
. - on(event: 'close',listener: () => void): this;
Adds the
listener
function to the end of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
- once(event: 'close',listener: () => void): this;
Adds a one-time
listener
function for the event namedeventName
. The next timeeventName
is triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
- event: 'close',listener: () => void): this;
Adds the
listener
function to the beginning of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
- event: 'close',listener: () => void): this;
Adds a one-time
listener
function for the event namedeventName
to the beginning of the listeners array. The next timeeventName
is triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName
, including any wrappers (such as those created by.once()
).import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log');
- eventName?: string | symbol): this;
Removes all listeners, or those of the specified
eventName
.It is bad practice to remove listeners added elsewhere in the code, particularly when the
EventEmitter
instance was created by some other component or module (e.g. sockets or file streams).Returns a reference to the
EventEmitter
, so that calls can be chained. - event: 'close',listener: () => void): this;
Removes the specified
listener
from the listener array for the event namedeventName
.const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback);
removeListener()
will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specifiedeventName
, thenremoveListener()
must be called multiple times to remove each instance.Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any
removeListener()
orremoveAllListeners()
calls after emitting and before the last listener finishes execution will not remove them fromemit()
in progress. Subsequent events behave as expected.import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // A
Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the
emitter.listeners()
method will need to be recreated.When a single function has been added as a handler multiple times for a single event (as in the example below),
removeListener()
will remove the most recently added instance. In the example theonce('ping')
listener is removed:import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping');
Returns a reference to the
EventEmitter
, so that calls can be chained.
- encoding: BufferEncoding): this;
The writable.setDefaultEncoding() method sets the default encoding for a Writable stream.
@param encoding The new default encoding
- n: number): this;
By default EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps find memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.
Returns a reference to the EventEmitter, so that calls can be chained. The
writable.uncork()
method flushes all data buffered since cork was called.When using
writable.cork()
andwritable.uncork()
to manage the buffering of writes to a stream, defer calls towritable.uncork()
usingprocess.nextTick()
. Doing so allows batching of allwritable.write()
calls that occur within a given Node.js event loop phase.stream.cork(); stream.write('some '); stream.write('data '); process.nextTick(() => stream.uncork());
If the
writable.cork()
method is called multiple times on a stream, the same number of calls towritable.uncork()
must be called to flush the buffered data.stream.cork(); stream.write('some '); stream.cork(); stream.write('data '); process.nextTick(() => { stream.uncork(); // The data will not be flushed until uncork() is called a second time. stream.uncork(); });
See also:
writable.cork()
.
Updates the Verify content with the given data, the encoding of which is given in inputEncoding. If inputEncoding is not provided, and the data is a string, an encoding of 'utf8' is enforced. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.
This can be called many times with new data as it is streamed.
data: string,
Updates the Verify content with the given data, the encoding of which is given in inputEncoding. If inputEncoding is not provided, and the data is a string, an encoding of 'utf8' is enforced. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.
This can be called many times with new data as it is streamed.
@param inputEncoding The encoding of the data string.
- signature: ArrayBufferView): boolean;
Verifies the provided data using the given object and signature.
If object is not a KeyObject, this function behaves as if object had been passed to createPublicKey. If it is an object, the following additional properties can be passed:
The signature argument is the previously calculated signature for the data, in the signatureEncoding. If a signatureEncoding is specified, the signature is expected to be a string; otherwise signature is expected to be a Buffer, TypedArray, or DataView.
The verify object cannot be used again after verify.verify() has been called. Multiple calls to verify.verify() will result in an error being thrown.
Because public keys can be derived from private keys, a private key may be passed instead of a public key.
signature: string,): boolean;
Verifies the provided data using the given object and signature.
If object is not a KeyObject, this function behaves as if object had been passed to createPublicKey. If it is an object, the following additional properties can be passed:
The signature argument is the previously calculated signature for the data, in the signatureEncoding. If a signatureEncoding is specified, the signature is expected to be a string; otherwise signature is expected to be a Buffer, TypedArray, or DataView.
The verify object cannot be used again after verify.verify() has been called. Multiple calls to verify.verify() will result in an error being thrown.
Because public keys can be derived from private keys, a private key may be passed instead of a public key.
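A minimal end-to-end sketch pairing Sign and Verify (assuming a freshly generated EC key pair; any supported key type works the same way):
const { generateKeyPairSync, createSign, createVerify } = await import('node:crypto');

const { privateKey, publicKey } = generateKeyPairSync('ec', { namedCurve: 'prime256v1' });

const sign = createSign('SHA256');
sign.update('some data to sign');
sign.end();
const signature = sign.sign(privateKey, 'hex');

const verify = createVerify('SHA256');
verify.update('some data to sign');
verify.end();
console.log(verify.verify(publicKey, signature, 'hex')); // Prints: true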
- chunk: any,): boolean;
The
writable.write()
method writes some data to the stream, and calls the suppliedcallback
once the data has been fully handled. If an error occurs, thecallback
will be called with the error as its first argument. Thecallback
is called asynchronously and before'error'
is emitted.The return value is
true
if the internal buffer is less than thehighWaterMark
configured when the stream was created after admittingchunk
. Iffalse
is returned, further attempts to write data to the stream should stop until the'drain'
event is emitted.While a stream is not draining, calls to
write()
will bufferchunk
, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'
event will be emitted. Oncewrite()
returns false, do not write more chunks until the'drain'
event is emitted. While callingwrite()
on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform
, because theTransform
streams are paused by default until they are piped or a'data'
or'readable'
event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readable
and use pipe. However, if callingwrite()
is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'
event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });
A
Writable
stream in object mode will always ignore theencoding
argument.
@param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.
@param callback Callback for when this chunk of data is flushed.
@returns false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.
chunk: any,encoding: BufferEncoding,): boolean;
The
writable.write()
method writes some data to the stream, and calls the suppliedcallback
once the data has been fully handled. If an error occurs, thecallback
will be called with the error as its first argument. Thecallback
is called asynchronously and before'error'
is emitted.The return value is
true
if the internal buffer is less than thehighWaterMark
configured when the stream was created after admittingchunk
. Iffalse
is returned, further attempts to write data to the stream should stop until the'drain'
event is emitted.While a stream is not draining, calls to
write()
will bufferchunk
, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'
event will be emitted. Oncewrite()
returns false, do not write more chunks until the'drain'
event is emitted. While callingwrite()
on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform
, because theTransform
streams are paused by default until they are piped or a'data'
or'readable'
event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readable
and use pipe. However, if callingwrite()
is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'
event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });
A
Writable
stream in object mode will always ignore theencoding
argument.
@param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.
@param encoding The encoding, if chunk is a string.
@param callback Callback for when this chunk of data is flushed.
@returns false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.
- ): Disposable;
Listens once to the
abort
event on the providedsignal
.Listening to the
abort
event on abort signals is unsafe and may lead to resource leaks since another third party with the signal can calle.stopImmediatePropagation()
. Unfortunately Node.js cannot change this since it would violate the web standard. Additionally, the original API makes it easy to forget to remove listeners.This API allows safely using
AbortSignal
s in Node.js APIs by solving these two issues by listening to the event such thatstopImmediatePropagation
does not prevent the listener from running.Returns a disposable so that it may be unsubscribed from more easily.
import { addAbortListener } from 'node:events'; function example(signal) { let disposable; try { signal.addEventListener('abort', (e) => e.stopImmediatePropagation()); disposable = addAbortListener(signal, (e) => { // Do something when signal is aborted. }); } finally { disposable?.[Symbol.dispose](); } }
@returnsDisposable that removes the
abort
listener. - options?: Pick<WritableOptions<Writable>, 'signal' | 'decodeStrings' | 'highWaterMark' | 'objectMode'>
A utility method for creating a
Writable
from a webWritableStream
. - name: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName
.For
EventEmitter
s this behaves exactly the same as calling.listeners
on the emitter.For
EventTarget
s this is the only way to get the event listeners for the event target. This is useful for debugging and diagnostic purposes.import { getEventListeners, EventEmitter } from 'node:events'; { const ee = new EventEmitter(); const listener = () => console.log('Events are fun'); ee.on('foo', listener); console.log(getEventListeners(ee, 'foo')); // [ [Function: listener] ] } { const et = new EventTarget(); const listener = () => console.log('Events are fun'); et.addEventListener('foo', listener); console.log(getEventListeners(et, 'foo')); // [ [Function: listener] ] }
- ): number;
Returns the currently set max amount of listeners.
For
EventEmitter
s this behaves exactly the same as calling.getMaxListeners
on the emitter.For
EventTarget
s this is the only way to get the max event listeners for the event target. If the number of event handlers on a single EventTarget exceeds the max set, the EventTarget will print a warning.import { getMaxListeners, setMaxListeners, EventEmitter } from 'node:events'; { const ee = new EventEmitter(); console.log(getMaxListeners(ee)); // 10 setMaxListeners(11, ee); console.log(getMaxListeners(ee)); // 11 } { const et = new EventTarget(); console.log(getMaxListeners(et)); // 10 setMaxListeners(11, et); console.log(getMaxListeners(et)); // 11 }
- emitter: EventEmitter,eventName: string | symbol,options?: StaticEventEmitterIteratorOptions): AsyncIterator<any[]>;
import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); }); for await (const event of on(ee, 'foo')) { // The execution of this inner block is synchronous and it // processes one event at a time (even with await). Do not use // if concurrent execution is required. console.log(event); // prints ['bar'] [42] } // Unreachable here
Returns an
AsyncIterator
that iterateseventName
events. It will throw if theEventEmitter
emits'error'
. It removes all listeners when exiting the loop. Thevalue
returned by each iteration is an array composed of the emitted event arguments.An
AbortSignal
can be used to cancel waiting on events:import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ac = new AbortController(); (async () => { const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); }); for await (const event of on(ee, 'foo', { signal: ac.signal })) { // The execution of this inner block is synchronous and it // processes one event at a time (even with await). Do not use // if concurrent execution is required. console.log(event); // prints ['bar'] [42] } // Unreachable here })(); process.nextTick(() => ac.abort());
Use the
close
option to specify an array of event names that will end the iteration:import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); ee.emit('close'); }); for await (const event of on(ee, 'foo', { close: ['close'] })) { console.log(event); // prints ['bar'] [42] } // the loop will exit after 'close' is emitted console.log('done'); // prints 'done'
@returnsAn
AsyncIterator
that iterateseventName
events emitted by theemitter
eventName: string,options?: StaticEventEmitterIteratorOptions): AsyncIterator<any[]>;import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); }); for await (const event of on(ee, 'foo')) { // The execution of this inner block is synchronous and it // processes one event at a time (even with await). Do not use // if concurrent execution is required. console.log(event); // prints ['bar'] [42] } // Unreachable here
Returns an
AsyncIterator
that iterateseventName
events. It will throw if theEventEmitter
emits'error'
. It removes all listeners when exiting the loop. Thevalue
returned by each iteration is an array composed of the emitted event arguments.An
AbortSignal
can be used to cancel waiting on events:import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ac = new AbortController(); (async () => { const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); }); for await (const event of on(ee, 'foo', { signal: ac.signal })) { // The execution of this inner block is synchronous and it // processes one event at a time (even with await). Do not use // if concurrent execution is required. console.log(event); // prints ['bar'] [42] } // Unreachable here })(); process.nextTick(() => ac.abort());
Use the
close
option to specify an array of event names that will end the iteration:import { on, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); // Emit later on process.nextTick(() => { ee.emit('foo', 'bar'); ee.emit('foo', 42); ee.emit('close'); }); for await (const event of on(ee, 'foo', { close: ['close'] })) { console.log(event); // prints ['bar'] [42] } // the loop will exit after 'close' is emitted console.log('done'); // prints 'done'
@returnsAn
AsyncIterator
that iterateseventName
events emitted by theemitter
- emitter: EventEmitter,eventName: string | symbol,options?: StaticEventEmitterOptions): Promise<any[]>;
Creates a
Promise
that is fulfilled when theEventEmitter
emits the given event or that is rejected if theEventEmitter
emits'error'
while waiting. ThePromise
will resolve with an array of all the arguments emitted to the given event.This method is intentionally generic and works with the web platform EventTarget interface, which has no special
'error'
event semantics and does not listen to the'error'
event.import { once, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); process.nextTick(() => { ee.emit('myevent', 42); }); const [value] = await once(ee, 'myevent'); console.log(value); const err = new Error('kaboom'); process.nextTick(() => { ee.emit('error', err); }); try { await once(ee, 'myevent'); } catch (err) { console.error('error happened', err); }
The special handling of the
'error'
event is only used whenevents.once()
is used to wait for another event. Ifevents.once()
is used to wait for the 'error'
event itself, then it is treated as any other kind of event without special handling:import { EventEmitter, once } from 'node:events'; const ee = new EventEmitter(); once(ee, 'error') .then(([err]) => console.log('ok', err.message)) .catch((err) => console.error('error', err.message)); ee.emit('error', new Error('boom')); // Prints: ok boom
An
AbortSignal
can be used to cancel waiting for the event:import { EventEmitter, once } from 'node:events'; const ee = new EventEmitter(); const ac = new AbortController(); async function foo(emitter, event, signal) { try { await once(emitter, event, { signal }); console.log('event emitted!'); } catch (error) { if (error.name === 'AbortError') { console.error('Waiting for the event was canceled!'); } else { console.error('There was an error', error.message); } } } foo(ee, 'foo', ac.signal); ac.abort(); // Abort waiting for the event ee.emit('foo'); // Prints: Waiting for the event was canceled!
eventName: string,options?: StaticEventEmitterOptions): Promise<any[]>;Creates a
Promise
that is fulfilled when theEventEmitter
emits the given event or that is rejected if theEventEmitter
emits'error'
while waiting. ThePromise
will resolve with an array of all the arguments emitted to the given event.This method is intentionally generic and works with the web platform EventTarget interface, which has no special
'error'
event semantics and does not listen to the'error'
event.import { once, EventEmitter } from 'node:events'; import process from 'node:process'; const ee = new EventEmitter(); process.nextTick(() => { ee.emit('myevent', 42); }); const [value] = await once(ee, 'myevent'); console.log(value); const err = new Error('kaboom'); process.nextTick(() => { ee.emit('error', err); }); try { await once(ee, 'myevent'); } catch (err) { console.error('error happened', err); }
The special handling of the
'error'
event is only used whenevents.once()
is used to wait for another event. Ifevents.once()
is used to wait for the 'error'
event itself, then it is treated as any other kind of event without special handling:import { EventEmitter, once } from 'node:events'; const ee = new EventEmitter(); once(ee, 'error') .then(([err]) => console.log('ok', err.message)) .catch((err) => console.error('error', err.message)); ee.emit('error', new Error('boom')); // Prints: ok boom
An
AbortSignal
can be used to cancel waiting for the event:import { EventEmitter, once } from 'node:events'; const ee = new EventEmitter(); const ac = new AbortController(); async function foo(emitter, event, signal) { try { await once(emitter, event, { signal }); console.log('event emitted!'); } catch (error) { if (error.name === 'AbortError') { console.error('Waiting for the event was canceled!'); } else { console.error('There was an error', error.message); } } } foo(ee, 'foo', ac.signal); ac.abort(); // Abort waiting for the event ee.emit('foo'); // Prints: Waiting for the event was canceled!
- n?: number,): void;
import { setMaxListeners, EventEmitter } from 'node:events'; const target = new EventTarget(); const emitter = new EventEmitter(); setMaxListeners(5, target, emitter);
@param nA non-negative number. The maximum number of listeners per
EventTarget
event.@param eventTargetsZero or more {EventTarget} or {EventEmitter} instances. If none are specified,
n
is set as the default max for all newly created {EventTarget} and {EventEmitter} objects. A utility method for creating a web
WritableStream
from aWritable
.
class X509Certificate
Encapsulates an X509 certificate and provides read-only access to its information.
const { X509Certificate } = await import('node:crypto'); const x509 = new X509Certificate('{... pem encoded cert ...}'); console.log(x509.subject);
- readonly fingerprint: string
The SHA-1 fingerprint of this certificate.
Because SHA-1 is cryptographically broken and because the security of SHA-1 is significantly worse than that of algorithms that are commonly used to sign certificates, consider using
x509.fingerprint256
instead. - readonly fingerprint512: string
The SHA-512 fingerprint of this certificate.
Because computing the SHA-256 fingerprint is usually faster and because it is only half the size of the SHA-512 fingerprint,
x509.fingerprint256
may be a better choice. While SHA-512 presumably provides a higher level of security in general, the security of SHA-256 matches that of most algorithms that are commonly used to sign certificates. - readonly infoAccess: undefined | string
A textual representation of the certificate's authority information access extension.
This is a line feed separated list of access descriptions. Each line begins with the access method and the kind of the access location, followed by a colon and the value associated with the access location.
After the prefix denoting the access method and the kind of the access location, the remainder of each line might be enclosed in quotes to indicate that the value is a JSON string literal. For backward compatibility, Node.js only uses JSON string literals within this property when necessary to avoid ambiguity. Third-party code should be prepared to handle both possible entry formats.
- readonly issuerCertificate?: X509Certificate
The issuer certificate or
undefined
if the issuer certificate is not available. - readonly serialNumber: string
The serial number of this certificate.
Serial numbers are assigned by certificate authorities and do not uniquely identify certificates. Consider using
x509.fingerprint256
as a unique identifier instead. - readonly subjectAltName: undefined | string
The subject alternative name specified for this certificate.
This is a comma-separated list of subject alternative names. Each entry begins with a string identifying the kind of the subject alternative name followed by a colon and the value associated with the entry.
Earlier versions of Node.js incorrectly assumed that it is safe to split this property at the two-character sequence
', '
(see CVE-2021-44532). However, both malicious and legitimate certificates can contain subject alternative names that include this sequence when represented as a string.After the prefix denoting the type of the entry, the remainder of each entry might be enclosed in quotes to indicate that the value is a JSON string literal. For backward compatibility, Node.js only uses JSON string literals within this property when necessary to avoid ambiguity. Third-party code should be prepared to handle both possible entry formats.
- readonly validFromDate: Date
The date/time from which this certificate is valid, encapsulated in a
Date
object. - readonly validToDate: Date
The date/time until which this certificate is valid, encapsulated in a
Date
object. - checkEmail(email: string, options?): undefined | string;
Checks whether the certificate matches the given email address.
If the
'subject'
option is undefined or set to'default'
, the certificate subject is only considered if the subject alternative name extension either does not exist or does not contain any email addresses.If the
'subject'
option is set to'always'
and if the subject alternative name extension either does not exist or does not contain a matching email address, the certificate subject is considered.If the
'subject'
option is set to'never'
, the certificate subject is never considered, even if the certificate contains no subject alternative names.@returnsReturns
email
if the certificate matches,undefined
if it does not. - checkHost(name: string, options?): undefined | string;
Checks whether the certificate matches the given host name.
If the certificate matches the given host name, the matching subject name is returned. The returned name might be an exact match (e.g.,
foo.example.com
) or it might contain wildcards (e.g.,*.example.com
). Because host name comparisons are case-insensitive, the returned subject name might also differ from the givenname
in capitalization.If the
'subject'
option is undefined or set to'default'
, the certificate subject is only considered if the subject alternative name extension either does not exist or does not contain any DNS names. This behavior is consistent with RFC 2818 ("HTTP Over TLS").If the
'subject'
option is set to'always'
and if the subject alternative name extension either does not exist or does not contain a matching DNS name, the certificate subject is considered.If the
'subject'
option is set to'never'
, the certificate subject is never considered, even if the certificate contains no subject alternative names.@returnsReturns a subject name that matches
name
, orundefined
if no subject name matchesname
. - checkIP(ip: string): undefined | string;
Checks whether the certificate matches the given IP address (IPv4 or IPv6).
Only RFC 5280
iPAddress
subject alternative names are considered, and they must match the givenip
address exactly. Other subject alternative names as well as the subject field of the certificate are ignored.@returnsReturns
ip
if the certificate matches,undefined
if it does not. - checkIssued(otherCert: X509Certificate): boolean;
Checks whether this certificate was issued by the given
otherCert
. - checkPrivateKey(privateKey: KeyObject): boolean;
Checks whether the public key for this certificate is consistent with the given private key.
@param privateKeyA private key.
There is no standard JSON encoding for X509 certificates. The
toJSON()
method returns a string containing the PEM encoded certificate. The toLegacyObject() method returns information about this certificate using the legacy
certificate object
encoding. The toString() method returns the PEM-encoded certificate.
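Example: a minimal sketch of inspecting a certificate (the cert.pem path is a placeholder):
import { readFileSync } from 'node:fs';
const { X509Certificate } = await import('node:crypto');

// cert.pem is assumed to contain a PEM-encoded certificate.
const x509 = new X509Certificate(readFileSync('cert.pem'));

// Prefer the SHA-256 fingerprint as a stable identifier.
console.log(x509.fingerprint256);

// Returns the matching subject name, or undefined if the host does not match.
console.log(x509.checkHost('example.com'));

// Validity window as Date objects.
console.log(x509.validFromDate, x509.validToDate);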
The
DiffieHellmanGroup
class takes a well-known modp group as its argument. It works the same asDiffieHellman
, except that it does not allow changing its keys after creation. In other words, it does not implementsetPublicKey()
orsetPrivateKey()
methods.const { createDiffieHellmanGroup } = await import('node:crypto'); const dh = createDiffieHellmanGroup('modp1');
The name (e.g.
'modp1'
) is taken from RFC 2412 (modp1 and 2) and RFC 3526:
perl -ne 'print "$1\n" if /"(modp\d+)"/' src/node_crypto_groups.h
modp1  #  768 bits
modp2  # 1024 bits
modp5  # 1536 bits
modp14 # 2048 bits
modp15 # etc.
modp16
modp17
modp18
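Example: a sketch of a full exchange between two parties (the group choice is arbitrary):
const { createDiffieHellmanGroup } = await import('node:crypto');

// Both sides use the same well-known group, so no prime needs to be
// negotiated or transmitted beforehand.
const alice = createDiffieHellmanGroup('modp14');
const bob = createDiffieHellmanGroup('modp14');
alice.generateKeys();
bob.generateKeys();

// Each side combines its own private key with the peer's public key.
const aliceSecret = alice.computeSecret(bob.getPublicKey());
const bobSecret = bob.computeSecret(alice.getPublicKey());
console.log(aliceSecret.equals(bobSecret)); // true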
A convenient alias for
crypto.webcrypto.subtle
.An implementation of the Web Crypto API standard.
See the Web Crypto API documentation for details.
- checkPrime(candidate, callback): void;
Checks the primality of the
candidate
.checkPrime(candidate, options, callback): void;Checks the primality of the
candidate
. - checkPrimeSync(candidate, options?): boolean;
Checks the primality of the
candidate
.@param candidateA possible prime encoded as a sequence of big endian octets of arbitrary length.
@returnstrue
if the candidate is a prime with an error probability less than 0.25 ** options.checks.
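Example: a sketch pairing checkPrimeSync with generatePrimeSync (the size and check count are arbitrary):
const { generatePrimeSync, checkPrimeSync } = await import('node:crypto');

// Generate a 512-bit probable prime, then re-test it with 20
// Miller-Rabin iterations for a smaller error probability.
const candidate = generatePrimeSync(512);
console.log(checkPrimeSync(candidate, { checks: 20 })); // true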
Creates and returns a
Cipher
object, with the givenalgorithm
,key
and initialization vector (iv
).The
options
argument controls stream behavior and is optional except when a cipher in CCM or OCB mode (e.g.'aes-128-ccm'
) is used. In that case, theauthTagLength
option is required and specifies the length of the authentication tag in bytes, seeCCM mode
. In GCM mode, theauthTagLength
option is not required but can be used to set the length of the authentication tag that will be returned bygetAuthTag()
and defaults to 16 bytes. Forchacha20-poly1305
, theauthTagLength
option defaults to 16 bytes.The
algorithm
is dependent on OpenSSL, examples are'aes192'
, etc. On recent OpenSSL releases,openssl list -cipher-algorithms
will display the available cipher algorithms.The
key
is the raw key used by thealgorithm
andiv
is an initialization vector. Both arguments must be'utf8'
encoded strings,Buffers
,TypedArray
, orDataView
s. Thekey
may optionally be aKeyObject
of typesecret
. If the cipher does not need an initialization vector,iv
may benull
.When passing strings for
key
oriv
, please considercaveats when using strings as inputs to cryptographic APIs
.Initialization vectors should be unpredictable and unique; ideally, they will be cryptographically random. They do not have to be secret: IVs are typically just added to ciphertext messages unencrypted. It may sound contradictory that something has to be unpredictable and unique, but does not have to be secret; remember that an attacker must not be able to predict ahead of time what a given IV will be.
@param optionsstream.transform
options
All other createCipheriv overloads (including those taking algorithm: 'chacha20-poly1305' and algorithm: string) accept the same arguments and share the documentation above.
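Example: a sketch of the non-streaming update()/final() usage with AES-256-GCM (the random key stands in for a properly derived one):
const { randomBytes, createCipheriv } = await import('node:crypto');

const key = randomBytes(32); // 256-bit key; normally derived, e.g. with scrypt
const iv = randomBytes(12);  // 96-bit IV, the conventional size for GCM

const cipher = createCipheriv('aes-256-gcm', key, iv, { authTagLength: 16 });
let encrypted = cipher.update('some clear text data', 'utf8', 'hex');
encrypted += cipher.final('hex');

// The tag must accompany the ciphertext and IV so the receiver can
// authenticate the message.
const authTag = cipher.getAuthTag();
console.log(encrypted, authTag.toString('hex'));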
Creates and returns a
Decipher
object that uses the givenalgorithm
,key
and initialization vector (iv
).The
options
argument controls stream behavior and is optional except when a cipher in CCM or OCB mode (e.g.'aes-128-ccm'
) is used. In that case, theauthTagLength
option is required and specifies the length of the authentication tag in bytes, seeCCM mode
. In GCM mode, theauthTagLength
option is not required but can be used to restrict accepted authentication tags to those with the specified length. Forchacha20-poly1305
, theauthTagLength
option defaults to 16 bytes.The
algorithm
is dependent on OpenSSL, examples are'aes192'
, etc. On recent OpenSSL releases,openssl list -cipher-algorithms
will display the available cipher algorithms.The
key
is the raw key used by thealgorithm
andiv
is an initialization vector. Both arguments must be'utf8'
encoded strings,Buffers
,TypedArray
, orDataView
s. Thekey
may optionally be aKeyObject
of typesecret
. If the cipher does not need an initialization vector,iv
may benull
.When passing strings for
key
oriv
, please considercaveats when using strings as inputs to cryptographic APIs
.Initialization vectors should be unpredictable and unique; ideally, they will be cryptographically random. They do not have to be secret: IVs are typically just added to ciphertext messages unencrypted. It may sound contradictory that something has to be unpredictable and unique, but does not have to be secret; remember that an attacker must not be able to predict ahead of time what a given IV will be.
@param optionsstream.transform
options
All other createDecipheriv overloads (including those taking algorithm: 'chacha20-poly1305' and algorithm: string) accept the same arguments and share the documentation above.
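Example: a self-contained sketch of an AES-256-GCM round trip; setAuthTag() must be called before final():
const { randomBytes, createCipheriv, createDecipheriv } = await import('node:crypto');

// Produce a ciphertext to decrypt (throwaway key/IV for illustration).
const key = randomBytes(32);
const iv = randomBytes(12);
const cipher = createCipheriv('aes-256-gcm', key, iv);
const encrypted = cipher.update('some clear text data', 'utf8', 'hex') + cipher.final('hex');
const authTag = cipher.getAuthTag();

// Decryption fails with an error unless the authentication tag matches.
const decipher = createDecipheriv('aes-256-gcm', key, iv);
decipher.setAuthTag(authTag);
let decrypted = decipher.update(encrypted, 'hex', 'utf8');
decrypted += decipher.final('utf8'); // throws if the tag does not match
console.log(decrypted); // 'some clear text data'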
- primeLength: number, generator?: number
Creates a
DiffieHellman
key exchange object using the suppliedprime
and an optional specificgenerator
.The
generator
argument can be a number, string, orBuffer
. Ifgenerator
is not specified, the value2
is used.If
primeEncoding
is specified,prime
is expected to be a string; otherwise aBuffer
,TypedArray
, orDataView
is expected.If
generatorEncoding
is specified,generator
is expected to be a string; otherwise a number,Buffer
,TypedArray
, orDataView
is expected.
The remaining createDiffieHellman overloads accept the prime as a Buffer, TypedArray, DataView, or string (with a primeEncoding argument), and the generator as a number, string (with a generatorEncoding argument), Buffer, TypedArray, or DataView; all of them share the documentation above.@param primeEncodingThe encoding of the prime string.@param generatorEncodingThe encoding of the generator string. - curveName: string
Creates an Elliptic Curve Diffie-Hellman (
ECDH
) key exchange object using a predefined curve specified by thecurveName
string. Use getCurves to obtain a list of available curve names. On recent OpenSSL releases,openssl ecparam -list_curves
will also display the name and description of each available elliptic curve.
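Example: a minimal ECDH sketch (the curve choice is arbitrary):
const { createECDH } = await import('node:crypto');

const alice = createECDH('prime256v1');
const bob = createECDH('prime256v1');
alice.generateKeys();
bob.generateKeys();

// Both sides arrive at the same shared secret.
const aliceSecret = alice.computeSecret(bob.getPublicKey());
const bobSecret = bob.computeSecret(alice.getPublicKey());
console.log(aliceSecret.equals(bobSecret)); // true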
- algorithm: string,
Creates and returns a
Hash
object that can be used to generate hash digests using the givenalgorithm
. Optionaloptions
argument controls stream behavior. For XOF hash functions such as'shake256'
, theoutputLength
option can be used to specify the desired output length in bytes.The
algorithm
is dependent on the available algorithms supported by the version of OpenSSL on the platform. Examples are'sha256'
,'sha512'
, etc. On recent releases of OpenSSL,openssl list -digest-algorithms
will display the available digest algorithms.Example: generating the sha256 sum of a file
import { createReadStream, } from 'node:fs'; import { argv } from 'node:process'; const { createHash, } = await import('node:crypto'); const filename = argv[2]; const hash = createHash('sha256'); const input = createReadStream(filename); input.on('readable', () => { // Only one element is going to be produced by the // hash stream. const data = input.read(); if (data) hash.update(data); else { console.log(`${hash.digest('hex')} ${filename}`); } });
@param optionsstream.transform
options - algorithm: string,): Hmac;
Creates and returns an
Hmac
object that uses the givenalgorithm
andkey
. Optionaloptions
argument controls stream behavior.The
algorithm
is dependent on the available algorithms supported by the version of OpenSSL on the platform. Examples are'sha256'
,'sha512'
, etc. On recent releases of OpenSSL,openssl list -digest-algorithms
will display the available digest algorithms.The
key
is the HMAC key used to generate the cryptographic HMAC hash. If it is aKeyObject
, its type must besecret
. If it is a string, please considercaveats when using strings as inputs to cryptographic APIs
. If it was obtained from a cryptographically secure source of entropy, such as randomBytes or generateKey, its length should not exceed the block size ofalgorithm
(e.g., 512 bits for SHA-256).Example: generating the sha256 HMAC of a file
import { createReadStream, } from 'node:fs'; import { argv } from 'node:process'; const { createHmac, } = await import('node:crypto'); const filename = argv[2]; const hmac = createHmac('sha256', 'a secret'); const input = createReadStream(filename); input.on('readable', () => { // Only one element is going to be produced by the // hash stream. const data = input.read(); if (data) hmac.update(data); else { console.log(`${hmac.digest('hex')} ${filename}`); } });
@param optionsstream.transform
options Creates and returns a new key object containing a private key. If
key
is a string orBuffer
,format
is assumed to be'pem'
; otherwise,key
must be an object with the properties described above.If the private key is encrypted, a
passphrase
must be specified. The length of the passphrase is limited to 1024 bytes.Creates and returns a new key object containing a public key. If
key
is a string orBuffer
,format
is assumed to be'pem'
; ifkey
is aKeyObject
with type'private'
, the public key is derived from the given private key; otherwise,key
must be an object with the properties described above.If the format is
'pem'
, the'key'
may also be an X.509 certificate.Because public keys can be derived from private keys, a private key may be passed instead of a public key. In that case, this function behaves as if createPrivateKey had been called, except that the type of the returned
KeyObject
will be'public'
and that the private key cannot be extracted from the returnedKeyObject
. Similarly, if aKeyObject
with type'private'
is given, a newKeyObject
with type'public'
will be returned and it will be impossible to extract the private key from the returned object.- key: ArrayBufferView
Creates and returns a new key object containing a secret key for symmetric encryption or
Hmac
.key: string,encoding: BufferEncodingCreates and returns a new key object containing a secret key for symmetric encryption or
Hmac
.@param encodingThe string encoding when
key
is a string.
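Example: a sketch of creating the three kinds of key objects (the EC curve and key material are arbitrary):
const { createSecretKey, createPrivateKey, createPublicKey, generateKeyPairSync } = await import('node:crypto');

// A secret key for symmetric ciphers or HMAC.
const secretKey = createSecretKey(Buffer.from('0123456789abcdef0123456789abcdef'));
console.log(secretKey.type); // 'secret'

// Round-trip a freshly generated private key through PEM.
const { privateKey: pem } = generateKeyPairSync('ec', {
  namedCurve: 'prime256v1',
  privateKeyEncoding: { type: 'pkcs8', format: 'pem' },
});
const privateKey = createPrivateKey(pem);
const publicKey = createPublicKey(privateKey); // derived from the private key
console.log(privateKey.type, publicKey.type); // 'private' 'public'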
- algorithm: string,
Creates and returns a
Sign
object that uses the givenalgorithm
. Use getHashes to obtain the names of the available digest algorithms. Optionaloptions
argument controls thestream.Writable
behavior.In some cases, a
Sign
instance can be created using the name of a signature algorithm, such as'RSA-SHA256'
, instead of a digest algorithm. This will use the corresponding digest algorithm. This does not work for all signature algorithms, such as'ecdsa-with-SHA256'
, so it is best to always use digest algorithm names.@param optionsstream.Writable
options - algorithm: string,
Creates and returns a
Verify
object that uses the given algorithm. Use getHashes to obtain an array of names of the available signing algorithms. Optionaloptions
argument controls thestream.Writable
behavior.In some cases, a
Verify
instance can be created using the name of a signature algorithm, such as'RSA-SHA256'
, instead of a digest algorithm. This will use the corresponding digest algorithm. This does not work for all signature algorithms, such as'ecdsa-with-SHA256'
, so it is best to always use digest algorithm names.@param optionsstream.Writable
options Computes the Diffie-Hellman secret based on a
privateKey
and apublicKey
. Both keys must have the sameasymmetricKeyType
, which must be one of'dh'
(for Diffie-Hellman),'ec'
(for ECDH),'x448'
, or'x25519'
(for ECDH-ES).
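Example: a sketch computing an X25519 shared secret from two freshly generated key pairs:
const { generateKeyPairSync, diffieHellman } = await import('node:crypto');

const alice = generateKeyPairSync('x25519');
const bob = generateKeyPairSync('x25519');

// Both key pairs have asymmetricKeyType 'x25519', so they are compatible.
const aliceSecret = diffieHellman({ privateKey: alice.privateKey, publicKey: bob.publicKey });
const bobSecret = diffieHellman({ privateKey: bob.privateKey, publicKey: alice.publicKey });
console.log(aliceSecret.equals(bobSecret)); // true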
- type: 'hmac' | 'aes',options: { length: number },): void;
Asynchronously generates a new random secret key of the given
length
. Thetype
will determine which validations will be performed on thelength
.const { generateKey, } = await import('node:crypto'); generateKey('hmac', { length: 512 }, (err, key) => { if (err) throw err; console.log(key.export().toString('hex')); // 46e..........620 });
The size of a generated HMAC key should not exceed the block size of the underlying hash function. See createHmac for more information.
@param typeThe intended use of the generated secret key. Currently accepted values are
'hmac'
and'aes'
. - type: 'rsa',): void;
Generates a new asymmetric key pair of the given
type
. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncoding
orprivateKeyEncoding
was specified, this function behaves as ifkeyObject.export()
had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject
.It is recommended to encode public keys as
'spki'
and private keys as'pkcs8'
with encryption for long-term storage:const { generateKeyPair, } = await import('node:crypto'); generateKeyPair('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, }, (err, publicKey, privateKey) => { // Handle errors and use the generated key pair. });
On completion,
callback
will be called witherr
set toundefined
andpublicKey
/privateKey
representing the generated key pair.If this method is invoked as its
util.promisify()
ed version, it returns aPromise
for anObject
withpublicKey
andprivateKey
properties.@param typeMust be
'rsa'
,'rsa-pss'
,'dsa'
,'ec'
,'ed25519'
,'ed448'
,'x25519'
,'x448'
, or'dh'
.type: 'rsa',): void;Generates a new asymmetric key pair of the given
type
. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncoding
orprivateKeyEncoding
was specified, this function behaves as ifkeyObject.export()
had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject
.It is recommended to encode public keys as
'spki'
and private keys as'pkcs8'
with encryption for long-term storage:const { generateKeyPair, } = await import('node:crypto'); generateKeyPair('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, }, (err, publicKey, privateKey) => { // Handle errors and use the generated key pair. });
On completion,
callback
will be called witherr
set toundefined
andpublicKey
/privateKey
representing the generated key pair.If this method is invoked as its
util.promisify()
ed version, it returns aPromise
for anObject
withpublicKey
andprivateKey
properties.@param typeMust be
'rsa'
,'rsa-pss'
,'dsa'
,'ec'
,'ed25519'
,'ed448'
,'x25519'
,'x448'
, or'dh'
.type: 'rsa',): void;Generates a new asymmetric key pair of the given
type
. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncoding
orprivateKeyEncoding
was specified, this function behaves as ifkeyObject.export()
had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject
.It is recommended to encode public keys as
'spki'
and private keys as'pkcs8'
with encryption for long-term storage:const { generateKeyPair, } = await import('node:crypto'); generateKeyPair('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, }, (err, publicKey, privateKey) => { // Handle errors and use the generated key pair. });
On completion,
callback
will be called witherr
set toundefined
andpublicKey
/privateKey
representing the generated key pair.If this method is invoked as its
util.promisify()
ed version, it returns aPromise
for anObject
withpublicKey
andprivateKey
properties.@param typeMust be
'rsa'
,'rsa-pss'
,'dsa'
,'ec'
,'ed25519'
,'ed448'
,'x25519'
,'x448'
, or'dh'
.type: 'rsa',): void;Generates a new asymmetric key pair of the given
type
. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncoding
orprivateKeyEncoding
was specified, this function behaves as ifkeyObject.export()
had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject
.It is recommended to encode public keys as
'spki'
and private keys as'pkcs8'
with encryption for long-term storage:const { generateKeyPair, } = await import('node:crypto'); generateKeyPair('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, }, (err, publicKey, privateKey) => { // Handle errors and use the generated key pair. });
On completion,
callback
will be called witherr
set toundefined
andpublicKey
/privateKey
representing the generated key pair.If this method is invoked as its
util.promisify()
ed version, it returns aPromise
for anObject
withpublicKey
andprivateKey
properties.@param typeMust be
'rsa'
,'rsa-pss'
,'dsa'
,'ec'
,'ed25519'
,'ed448'
,'x25519'
,'x448'
, or'dh'
.type: 'rsa',): void;Generates a new asymmetric key pair of the given
type
. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncoding
orprivateKeyEncoding
was specified, this function behaves as ifkeyObject.export()
had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject
.It is recommended to encode public keys as
'spki'
and private keys as'pkcs8'
with encryption for long-term storage:const { generateKeyPair, } = await import('node:crypto'); generateKeyPair('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, }, (err, publicKey, privateKey) => { // Handle errors and use the generated key pair. });
On completion,
callback
will be called witherr
set toundefined
andpublicKey
/privateKey
representing the generated key pair.If this method is invoked as its
util.promisify()
ed version, it returns aPromise
for anObject
withpublicKey
andprivateKey
properties.@param typeMust be
'rsa'
,'rsa-pss'
,'dsa'
,'ec'
,'ed25519'
,'ed448'
,'x25519'
,'x448'
, or'dh'
.type: 'rsa-pss',): void;Generates a new asymmetric key pair of the given
type
. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncoding
orprivateKeyEncoding
was specified, this function behaves as ifkeyObject.export()
had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject
.It is recommended to encode public keys as
'spki'
and private keys as'pkcs8'
with encryption for long-term storage:const { generateKeyPair, } = await import('node:crypto'); generateKeyPair('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, }, (err, publicKey, privateKey) => { // Handle errors and use the generated key pair. });
On completion,
callback
will be called witherr
set toundefined
andpublicKey
/privateKey
representing the generated key pair.If this method is invoked as its
util.promisify()
ed version, it returns aPromise
for anObject
withpublicKey
andprivateKey
properties.@param typeMust be
'rsa'
,'rsa-pss'
,'dsa'
,'ec'
,'ed25519'
,'ed448'
,'x25519'
,'x448'
, or'dh'
.type: 'rsa-pss',): void;Generates a new asymmetric key pair of the given
type
. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncoding
orprivateKeyEncoding
was specified, this function behaves as ifkeyObject.export()
had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject
.It is recommended to encode public keys as
'spki'
and private keys as'pkcs8'
with encryption for long-term storage:const { generateKeyPair, } = await import('node:crypto'); generateKeyPair('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, }, (err, publicKey, privateKey) => { // Handle errors and use the generated key pair. });
On completion,
callback
will be called witherr
set toundefined
andpublicKey
/privateKey
representing the generated key pair.If this method is invoked as its
util.promisify()
ed version, it returns aPromise
for anObject
withpublicKey
andprivateKey
properties.@param typeMust be
'rsa'
,'rsa-pss'
,'dsa'
,'ec'
,'ed25519'
,'ed448'
,'x25519'
,'x448'
, or'dh'
.type: 'rsa-pss',): void;Generates a new asymmetric key pair of the given
type
. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncoding
orprivateKeyEncoding
was specified, this function behaves as ifkeyObject.export()
had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject
.It is recommended to encode public keys as
'spki'
and private keys as'pkcs8'
with encryption for long-term storage:const { generateKeyPair, } = await import('node:crypto'); generateKeyPair('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, }, (err, publicKey, privateKey) => { // Handle errors and use the generated key pair. });
On completion,
callback
will be called witherr
set toundefined
andpublicKey
/privateKey
representing the generated key pair.If this method is invoked as its
util.promisify()
ed version, it returns aPromise
for anObject
withpublicKey
andprivateKey
properties.@param typeMust be
'rsa'
,'rsa-pss'
,'dsa'
,'ec'
,'ed25519'
,'ed448'
,'x25519'
,'x448'
, or'dh'
.type: 'rsa-pss',): void;Generates a new asymmetric key pair of the given
type
. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncoding
orprivateKeyEncoding
was specified, this function behaves as ifkeyObject.export()
had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject
.It is recommended to encode public keys as
'spki'
and private keys as'pkcs8'
with encryption for long-term storage:const { generateKeyPair, } = await import('node:crypto'); generateKeyPair('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, }, (err, publicKey, privateKey) => { // Handle errors and use the generated key pair. });
On completion,
callback
will be called witherr
set toundefined
andpublicKey
/privateKey
representing the generated key pair.If this method is invoked as its
util.promisify()
ed version, it returns aPromise
for anObject
withpublicKey
andprivateKey
properties.@param typeMust be
'rsa'
,'rsa-pss'
,'dsa'
,'ec'
,'ed25519'
,'ed448'
,'x25519'
,'x448'
, or'dh'
.type: 'rsa-pss',): void;Generates a new asymmetric key pair of the given
type
. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncoding
orprivateKeyEncoding
was specified, this function behaves as ifkeyObject.export()
had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject
.It is recommended to encode public keys as
'spki'
and private keys as'pkcs8'
with encryption for long-term storage:const { generateKeyPair, } = await import('node:crypto'); generateKeyPair('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, }, (err, publicKey, privateKey) => { // Handle errors and use the generated key pair. });
On completion,
callback
will be called witherr
set toundefined
andpublicKey
/privateKey
representing the generated key pair.If this method is invoked as its
util.promisify()
ed version, it returns aPromise
for anObject
withpublicKey
andprivateKey
properties.@param typeMust be
'rsa'
,'rsa-pss'
,'dsa'
,'ec'
,'ed25519'
,'ed448'
,'x25519'
,'x448'
, or'dh'
.type: 'dsa',): void;Generates a new asymmetric key pair of the given
type
. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncoding
orprivateKeyEncoding
was specified, this function behaves as ifkeyObject.export()
had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject
.It is recommended to encode public keys as
'spki'
and private keys as'pkcs8'
with encryption for long-term storage:const { generateKeyPair, } = await import('node:crypto'); generateKeyPair('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, }, (err, publicKey, privateKey) => { // Handle errors and use the generated key pair. });
On completion,
callback
will be called witherr
set toundefined
andpublicKey
/privateKey
representing the generated key pair.If this method is invoked as its
util.promisify()
ed version, it returns aPromise
for anObject
withpublicKey
andprivateKey
properties.@param typeMust be
'rsa'
,'rsa-pss'
,'dsa'
,'ec'
,'ed25519'
,'ed448'
,'x25519'
,'x448'
, or'dh'
.type: 'dsa',): void;Generates a new asymmetric key pair of the given
type
. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncoding
orprivateKeyEncoding
was specified, this function behaves as ifkeyObject.export()
had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject
.It is recommended to encode public keys as
'spki'
and private keys as'pkcs8'
with encryption for long-term storage:const { generateKeyPair, } = await import('node:crypto'); generateKeyPair('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, }, (err, publicKey, privateKey) => { // Handle errors and use the generated key pair. });
On completion,
callback
will be called witherr
set toundefined
andpublicKey
/privateKey
representing the generated key pair.If this method is invoked as its
util.promisify()
ed version, it returns aPromise
for anObject
withpublicKey
andprivateKey
properties.@param typeMust be
'rsa'
,'rsa-pss'
,'dsa'
,'ec'
,'ed25519'
,'ed448'
,'x25519'
,'x448'
, or'dh'
.type: 'dsa',): void;Generates a new asymmetric key pair of the given
type
. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncoding
orprivateKeyEncoding
was specified, this function behaves as ifkeyObject.export()
had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject
.It is recommended to encode public keys as
'spki'
and private keys as'pkcs8'
with encryption for long-term storage:const { generateKeyPair, } = await import('node:crypto'); generateKeyPair('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, }, (err, publicKey, privateKey) => { // Handle errors and use the generated key pair. });
On completion,
callback
will be called witherr
set toundefined
andpublicKey
/privateKey
representing the generated key pair.If this method is invoked as its
util.promisify()
ed version, it returns aPromise
for anObject
withpublicKey
andprivateKey
properties.@param typeMust be
'rsa'
,'rsa-pss'
,'dsa'
,'ec'
,'ed25519'
,'ed448'
,'x25519'
,'x448'
, or'dh'
.type: 'dsa',): void;Generates a new asymmetric key pair of the given
type
. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncoding
orprivateKeyEncoding
was specified, this function behaves as ifkeyObject.export()
had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject
.It is recommended to encode public keys as
'spki'
and private keys as'pkcs8'
with encryption for long-term storage:const { generateKeyPair, } = await import('node:crypto'); generateKeyPair('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, }, (err, publicKey, privateKey) => { // Handle errors and use the generated key pair. });
On completion,
callback
will be called witherr
set toundefined
andpublicKey
/privateKey
representing the generated key pair.If this method is invoked as its
util.promisify()
ed version, it returns aPromise
for anObject
withpublicKey
andprivateKey
properties.@param typeMust be
'rsa'
,'rsa-pss'
,'dsa'
,'ec'
,'ed25519'
,'ed448'
,'x25519'
,'x448'
, or'dh'
.type: 'dsa',): void;Generates a new asymmetric key pair of the given
type
. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncoding
orprivateKeyEncoding
was specified, this function behaves as ifkeyObject.export()
had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject
.It is recommended to encode public keys as
'spki'
and private keys as'pkcs8'
with encryption for long-term storage:const { generateKeyPair, } = await import('node:crypto'); generateKeyPair('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, }, (err, publicKey, privateKey) => { // Handle errors and use the generated key pair. });
On completion,
callback
will be called witherr
set toundefined
andpublicKey
/privateKey
representing the generated key pair.If this method is invoked as its
util.promisify()
ed version, it returns aPromise
for anObject
withpublicKey
andprivateKey
properties.@param typeMust be
'rsa'
,'rsa-pss'
,'dsa'
,'ec'
,'ed25519'
,'ed448'
,'x25519'
,'x448'
, or'dh'
.type: 'ec',): void;Generates a new asymmetric key pair of the given
type
. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncoding
orprivateKeyEncoding
was specified, this function behaves as ifkeyObject.export()
had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject
.It is recommended to encode public keys as
'spki'
and private keys as'pkcs8'
with encryption for long-term storage:const { generateKeyPair, } = await import('node:crypto'); generateKeyPair('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, }, (err, publicKey, privateKey) => { // Handle errors and use the generated key pair. });
On completion,
callback
will be called witherr
set toundefined
andpublicKey
/privateKey
representing the generated key pair.If this method is invoked as its
util.promisify()
ed version, it returns aPromise
for anObject
withpublicKey
andprivateKey
properties.@param typeMust be
'rsa'
,'rsa-pss'
,'dsa'
,'ec'
,'ed25519'
,'ed448'
,'x25519'
,'x448'
, or'dh'
.type: 'ec',): void;Generates a new asymmetric key pair of the given
type
. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncoding
orprivateKeyEncoding
was specified, this function behaves as ifkeyObject.export()
had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject
.It is recommended to encode public keys as
'spki'
and private keys as'pkcs8'
with encryption for long-term storage:const { generateKeyPair, } = await import('node:crypto'); generateKeyPair('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, }, (err, publicKey, privateKey) => { // Handle errors and use the generated key pair. });
On completion,
callback
will be called witherr
set toundefined
andpublicKey
/privateKey
representing the generated key pair.If this method is invoked as its
util.promisify()
ed version, it returns aPromise
for anObject
withpublicKey
andprivateKey
properties.@param typeMust be
'rsa'
,'rsa-pss'
,'dsa'
,'ec'
,'ed25519'
,'ed448'
,'x25519'
,'x448'
, or'dh'
.type: 'ec',): void;Generates a new asymmetric key pair of the given
type
. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncoding
orprivateKeyEncoding
was specified, this function behaves as ifkeyObject.export()
had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject
.It is recommended to encode public keys as
'spki'
and private keys as'pkcs8'
with encryption for long-term storage:const { generateKeyPair, } = await import('node:crypto'); generateKeyPair('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, }, (err, publicKey, privateKey) => { // Handle errors and use the generated key pair. });
On completion,
callback
will be called witherr
set toundefined
andpublicKey
/privateKey
representing the generated key pair.If this method is invoked as its
util.promisify()
ed version, it returns aPromise
for anObject
withpublicKey
andprivateKey
properties.@param typeMust be
'rsa'
,'rsa-pss'
,'dsa'
,'ec'
,'ed25519'
,'ed448'
,'x25519'
,'x448'
, or'dh'
.type: 'ec',): void;Generates a new asymmetric key pair of the given
type
. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncoding
orprivateKeyEncoding
was specified, this function behaves as ifkeyObject.export()
had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject
.It is recommended to encode public keys as
'spki'
and private keys as'pkcs8'
with encryption for long-term storage:const { generateKeyPair, } = await import('node:crypto'); generateKeyPair('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, }, (err, publicKey, privateKey) => { // Handle errors and use the generated key pair. });
On completion,
callback
will be called witherr
set toundefined
andpublicKey
/privateKey
representing the generated key pair.If this method is invoked as its
util.promisify()
ed version, it returns aPromise
for anObject
withpublicKey
andprivateKey
properties.@param typeMust be
'rsa'
,'rsa-pss'
,'dsa'
,'ec'
,'ed25519'
,'ed448'
,'x25519'
,'x448'
, or'dh'
.type: 'ec',): void;Generates a new asymmetric key pair of the given
type
. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncoding
orprivateKeyEncoding
was specified, this function behaves as ifkeyObject.export()
had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject
.It is recommended to encode public keys as
'spki'
and private keys as'pkcs8'
with encryption for long-term storage:const { generateKeyPair, } = await import('node:crypto'); generateKeyPair('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, }, (err, publicKey, privateKey) => { // Handle errors and use the generated key pair. });
On completion,
callback
will be called witherr
set toundefined
andpublicKey
/privateKey
representing the generated key pair.If this method is invoked as its
util.promisify()
ed version, it returns aPromise
for anObject
withpublicKey
andprivateKey
properties.@param typeMust be
'rsa'
,'rsa-pss'
,'dsa'
,'ec'
,'ed25519'
,'ed448'
,'x25519'
,'x448'
, or'dh'
.type: 'ed25519',): void;Generates a new asymmetric key pair of the given
type
. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncoding
orprivateKeyEncoding
was specified, this function behaves as ifkeyObject.export()
had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject
.It is recommended to encode public keys as
'spki'
and private keys as'pkcs8'
with encryption for long-term storage:const { generateKeyPair, } = await import('node:crypto'); generateKeyPair('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, }, (err, publicKey, privateKey) => { // Handle errors and use the generated key pair. });
On completion,
callback
will be called witherr
set toundefined
andpublicKey
/privateKey
representing the generated key pair.If this method is invoked as its
util.promisify()
ed version, it returns aPromise
for anObject
withpublicKey
andprivateKey
properties.@param typeMust be
'rsa'
,'rsa-pss'
,'dsa'
,'ec'
,'ed25519'
,'ed448'
,'x25519'
,'x448'
, or'dh'
.type: 'ed25519',): void;Generates a new asymmetric key pair of the given
type
. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncoding
orprivateKeyEncoding
was specified, this function behaves as ifkeyObject.export()
had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject
.It is recommended to encode public keys as
'spki'
and private keys as'pkcs8'
with encryption for long-term storage:const { generateKeyPair, } = await import('node:crypto'); generateKeyPair('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, }, (err, publicKey, privateKey) => { // Handle errors and use the generated key pair. });
On completion,
callback
will be called witherr
set toundefined
andpublicKey
/privateKey
representing the generated key pair.If this method is invoked as its
util.promisify()
ed version, it returns aPromise
for anObject
withpublicKey
andprivateKey
properties.@param typeMust be
'rsa'
,'rsa-pss'
,'dsa'
,'ec'
,'ed25519'
,'ed448'
,'x25519'
,'x448'
, or'dh'
type: 'ed448', ...): void;

Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage, as shown in the example above.

On completion, callback will be called with err set to undefined and publicKey/privateKey representing the generated key pair.

If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

@param type Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.
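As a sketch of typical use for a signature key pair of this kind, assuming the one-shot crypto.sign()/crypto.verify() helpers (not described in this section): for Ed25519/Ed448 the digest argument is null, because hashing is built into the signature scheme.

const { generateKeyPair, sign, verify } = await import('node:crypto');

generateKeyPair('ed448', {}, (err, publicKey, privateKey) => {
  if (err) throw err;
  const message = Buffer.from('some data to sign');
  // Ed448 hashes internally, so no digest name is passed.
  const signature = sign(null, message, privateKey);
  console.log(verify(null, message, publicKey, signature)); // true
});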
type: 'x25519', ...): void;

Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage, as shown in the example above.

On completion, callback will be called with err set to undefined and publicKey/privateKey representing the generated key pair.

If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

@param type Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.
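A sketch of what an X25519 pair is for: deriving a shared secret with crypto.diffieHellman() (assumed available; it was added in Node.js 13.9.0). Both pairs are generated synchronously here only to keep the example short.

const { generateKeyPairSync, diffieHellman } = await import('node:crypto');

// Two X25519 key pairs, kept as KeyObjects (no encodings requested).
const alice = generateKeyPairSync('x25519', {});
const bob = generateKeyPairSync('x25519', {});

// Each side combines its own private key with the peer's public key.
const secretA = diffieHellman({ privateKey: alice.privateKey, publicKey: bob.publicKey });
const secretB = diffieHellman({ privateKey: bob.privateKey, publicKey: alice.publicKey });
console.log(secretA.equals(secretB)); // true: both sides derive the same secret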
type: 'x448', ...): void;

Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage, as shown in the example above.

On completion, callback will be called with err set to undefined and publicKey/privateKey representing the generated key pair.

If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

@param type Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.
type: 'rsa',

Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.

const {
  generateKeyPairSync,
} = await import('node:crypto');

const {
  publicKey,
  privateKey,
} = generateKeyPairSync('rsa', {
  modulusLength: 4096,
  publicKeyEncoding: {
    type: 'spki',
    format: 'pem',
  },
  privateKeyEncoding: {
    type: 'pkcs8',
    format: 'pem',
    cipher: 'aes-256-cbc',
    passphrase: 'top secret',
  },
});

The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

@param type Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.
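Because the private key above is exported encrypted, turning the PEM string back into a usable key requires the same passphrase. A minimal sketch using crypto.createPrivateKey(); the 2048-bit modulus is chosen here only to keep the example fast:

const { generateKeyPairSync, createPrivateKey } = await import('node:crypto');

const { privateKey } = generateKeyPairSync('rsa', {
  modulusLength: 2048,
  privateKeyEncoding: {
    type: 'pkcs8',
    format: 'pem',
    cipher: 'aes-256-cbc',
    passphrase: 'top secret',
  },
});

// Without the passphrase this would throw; with it, a KeyObject is restored.
const keyObject = createPrivateKey({ key: privateKey, passphrase: 'top secret' });
console.log(keyObject.asymmetricKeyType); // 'rsa'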
type: 'rsa-pss',

Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential (see the example above).

The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

@param type Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.
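The 'rsa-pss' type also accepts PSS-specific parameters that are bound into the generated key. The option names below (hashAlgorithm, mgf1HashAlgorithm, saltLength) are an assumption based on recent Node.js releases (older releases spelled them hash/mgf1Hash):

const { generateKeyPairSync } = await import('node:crypto');

// PSS parameters are fixed at generation time and restrict later use of the key.
const { publicKey, privateKey } = generateKeyPairSync('rsa-pss', {
  modulusLength: 2048,
  hashAlgorithm: 'sha256',     // digest bound into the key's PSS parameters
  mgf1HashAlgorithm: 'sha256', // MGF1 digest
  saltLength: 32,              // salt length in bytes
});
console.log(publicKey.asymmetricKeyType); // 'rsa-pss'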
type: 'dsa',

Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential (see the example above).

The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

@param type Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.
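For 'dsa', the key size is given by modulusLength plus the DSA-specific divisorLength (the bit length of q). A minimal sketch; note that DSA parameter generation can be noticeably slow:

const { generateKeyPairSync } = await import('node:crypto');

// divisorLength 256 is a common pairing with a 2048-bit modulus.
const { publicKey, privateKey } = generateKeyPairSync('dsa', {
  modulusLength: 2048,
  divisorLength: 256,
});
console.log(publicKey.asymmetricKeyType); // 'dsa'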
type: 'ec',

Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential (see the example above).

The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

@param type Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.
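EC keys are parameterized by a named curve rather than a modulus length. A sketch using the OpenSSL curve name 'prime256v1' (one common choice) together with the one-shot crypto.sign()/crypto.verify(), where ECDSA, unlike Ed25519/Ed448, takes an explicit digest:

const { generateKeyPairSync, sign, verify } = await import('node:crypto');

const { publicKey, privateKey } = generateKeyPairSync('ec', {
  namedCurve: 'prime256v1',
});

const data = Buffer.from('message');
// ECDSA over P-256 with SHA-256.
const signature = sign('sha256', data, privateKey);
console.log(verify('sha256', data, publicKey, signature)); // true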
type: 'ed25519',

Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential (see the example above).

The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

@param type Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.
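When no encodings are requested, the result can still be serialized later through keyObject.export(), which the text above references. A minimal sketch:

const { generateKeyPairSync } = await import('node:crypto');

// No encodings requested, so both halves come back as KeyObjects.
const { publicKey, privateKey } = generateKeyPairSync('ed25519', {});

// Equivalent to passing publicKeyEncoding/privateKeyEncoding up front.
const spkiPem = publicKey.export({ type: 'spki', format: 'pem' });
const pkcs8Pem = privateKey.export({ type: 'pkcs8', format: 'pem' });
console.log(spkiPem.startsWith('-----BEGIN PUBLIC KEY-----')); // true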
type: 'ed448',

Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential (see the example above).

The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

@param type Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.
type: 'x25519',

Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential (see the example above).

The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.

@param type Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.
.type: 'x448',Generates a new asymmetric key pair of the given
type
. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncoding
orprivateKeyEncoding
was specified, this function behaves as ifkeyObject.export()
had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject
.When encoding public keys, it is recommended to use
'spki'
. When encoding private keys, it is recommended to use'pkcs8'
with a strong passphrase, and to keep the passphrase confidential.const { generateKeyPairSync, } = await import('node:crypto'); const { publicKey, privateKey, } = generateKeyPairSync('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, });
The return value
{ publicKey, privateKey }
represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.@param typeMust be
'rsa'
,'rsa-pss'
,'dsa'
,'ec'
,'ed25519'
,'ed448'
,'x25519'
,'x448'
, or'dh'
.type: 'x448',Generates a new asymmetric key pair of the given
type
. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncoding
orprivateKeyEncoding
was specified, this function behaves as ifkeyObject.export()
had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject
.When encoding public keys, it is recommended to use
'spki'
. When encoding private keys, it is recommended to use'pkcs8'
with a strong passphrase, and to keep the passphrase confidential.const { generateKeyPairSync, } = await import('node:crypto'); const { publicKey, privateKey, } = generateKeyPairSync('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, });
The return value
{ publicKey, privateKey }
represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.@param typeMust be
'rsa'
,'rsa-pss'
,'dsa'
,'ec'
,'ed25519'
,'ed448'
,'x25519'
,'x448'
, or'dh'
.type: 'x448',Generates a new asymmetric key pair of the given
type
. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncoding
orprivateKeyEncoding
was specified, this function behaves as ifkeyObject.export()
had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject
.When encoding public keys, it is recommended to use
'spki'
. When encoding private keys, it is recommended to use'pkcs8'
with a strong passphrase, and to keep the passphrase confidential.const { generateKeyPairSync, } = await import('node:crypto'); const { publicKey, privateKey, } = generateKeyPairSync('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, });
The return value
{ publicKey, privateKey }
represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.@param typeMust be
'rsa'
,'rsa-pss'
,'dsa'
,'ec'
,'ed25519'
,'ed448'
,'x25519'
,'x448'
, or'dh'
.type: 'x448',Generates a new asymmetric key pair of the given
type
. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncoding
orprivateKeyEncoding
was specified, this function behaves as ifkeyObject.export()
had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject
.When encoding public keys, it is recommended to use
'spki'
. When encoding private keys, it is recommended to use'pkcs8'
with a strong passphrase, and to keep the passphrase confidential.const { generateKeyPairSync, } = await import('node:crypto'); const { publicKey, privateKey, } = generateKeyPairSync('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, });
The return value
{ publicKey, privateKey }
represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.@param typeMust be
'rsa'
,'rsa-pss'
,'dsa'
,'ec'
,'ed25519'
,'ed448'
,'x25519'
,'x448'
, or'dh'
.type: 'x448',Generates a new asymmetric key pair of the given
type
. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.If a
publicKeyEncoding
orprivateKeyEncoding
was specified, this function behaves as ifkeyObject.export()
had been called on its result. Otherwise, the respective part of the key is returned as aKeyObject
.When encoding public keys, it is recommended to use
'spki'
. When encoding private keys, it is recommended to use'pkcs8'
with a strong passphrase, and to keep the passphrase confidential.const { generateKeyPairSync, } = await import('node:crypto'); const { publicKey, privateKey, } = generateKeyPairSync('rsa', { modulusLength: 4096, publicKeyEncoding: { type: 'spki', format: 'pem', }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: 'top secret', }, });
The return value
{ publicKey, privateKey }
represents the generated key pair. When PEM encoding was selected, the respective key will be a string, otherwise it will be a buffer containing the data encoded as DER.@param typeMust be
'rsa'
,'rsa-pss'
,'dsa'
,'ec'
,'ed25519'
,'ed448'
,'x25519'
,'x448'
, or'dh'
- type: 'hmac' | 'aes', options: { length: number }

Synchronously generates a new random secret key of the given length. The type will determine which validations will be performed on the length.

const {
  generateKeySync,
} = await import('node:crypto');

const key = generateKeySync('hmac', { length: 512 });
console.log(key.export().toString('hex'));  // e89..........41e

The size of a generated HMAC key should not exceed the block size of the underlying hash function. See createHmac for more information.

@param type The intended use of the generated secret key. Currently accepted values are 'hmac' and 'aes'.
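For type: 'aes' the length is validated against the AES key sizes. A minimal sketch (the 256-bit length is one of the accepted values):

const {
  generateKeySync,
} = await import('node:crypto');

// AES keys must be 128, 192, or 256 bits long.
const aesKey = generateKeySync('aes', { length: 256 });
console.log(aesKey.symmetricKeySize); // 32 (bytes)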
- size: number, ): void;

Generates a pseudorandom prime of size bits.

If options.safe is true, the prime will be a safe prime -- that is, (prime - 1) / 2 will also be a prime.

The options.add and options.rem parameters can be used to enforce additional requirements, e.g., for Diffie-Hellman:

- If options.add and options.rem are both set, the prime will satisfy the condition that prime % add = rem.
- If only options.add is set and options.safe is not true, the prime will satisfy the condition that prime % add = 1.
- If only options.add is set and options.safe is set to true, the prime will instead satisfy the condition that prime % add = 3. This is necessary because prime % add = 1 for options.add > 2 would contradict the condition enforced by options.safe.
- options.rem is ignored if options.add is not given.

Both options.add and options.rem must be encoded as big-endian sequences if given as an ArrayBuffer, SharedArrayBuffer, TypedArray, Buffer, or DataView.

By default, the prime is encoded as a big-endian sequence of octets in an ArrayBuffer. If the bigint option is true, then a bigint is provided.

@param size The size (in bits) of the prime to generate.
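A minimal sketch of requesting a prime via the callback API, using the bigint option described above (the 128-bit size is arbitrary):

const {
  generatePrime,
} = await import('node:crypto');

// Request a 128-bit safe prime as a bigint.
generatePrime(128, { safe: true, bigint: true }, (err, prime) => {
  if (err) throw err;
  console.log(typeof prime); // 'bigint'
  console.log(prime % 2n === 1n); // true; primes larger than 2 are odd
});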
The remaining generatePrime overloads, and the synchronous generatePrimeSync(size[, options]) variants, repeat the documentation above verbatim; generatePrimeSync returns the prime directly, as an ArrayBuffer or as a bigint when options.bigint is true.
- nameOrNid: string | number,

Returns information about a given cipher.

Some ciphers accept variable-length keys and initialization vectors. By default, the crypto.getCipherInfo() method will return the default values for these ciphers. To test if a given key length or IV length is acceptable for a given cipher, use the keyLength and ivLength options. If the given values are unacceptable, undefined will be returned.

@param nameOrNid The name or nid of the cipher to query.
const {
  getCiphers,
} = await import('node:crypto');

console.log(getCiphers()); // ['aes-128-cbc', 'aes-128-ccm', ...]

@returns An array with the names of the supported cipher algorithms.
const {
  getCurves,
} = await import('node:crypto');

console.log(getCurves()); // ['Oakley-EC2N-3', 'Oakley-EC2N-4', ...]

@returns An array with the names of the supported elliptic curves.
- groupName: string

Creates a predefined DiffieHellmanGroup key exchange object. The supported groups are listed in the documentation for DiffieHellmanGroup.

The returned object mimics the interface of objects created by createDiffieHellman, but will not allow changing the keys (with diffieHellman.setPublicKey(), for example). The advantage of using this method is that the parties do not have to generate nor exchange a group modulus beforehand, saving both processor and communication time.

Example (obtaining a shared secret):
const {
  getDiffieHellman,
} = await import('node:crypto');

const alice = getDiffieHellman('modp14');
const bob = getDiffieHellman('modp14');

alice.generateKeys();
bob.generateKeys();

const aliceSecret = alice.computeSecret(bob.getPublicKey(), null, 'hex');
const bobSecret = bob.computeSecret(alice.getPublicKey(), null, 'hex');

// aliceSecret and bobSecret should be the same
console.log(aliceSecret === bobSecret);
- @returns 1 if and only if a FIPS compliant crypto provider is currently in use, 0 otherwise. A future semver-major release may change the return type of this API to a {boolean}.

const {
  getHashes,
} = await import('node:crypto');

console.log(getHashes()); // ['DSA', 'DSA-SHA', 'DSA-SHA1', ...]

@returns An array of the names of the supported hash algorithms, such as 'RSA-SHA256'. Hash algorithms are also called "digest" algorithms.

- typedArray: T): T;
A convenient alias for webcrypto.getRandomValues. This implementation is not compliant with the Web Crypto spec; to write web-compatible code, use webcrypto.getRandomValues instead.

@returns Returns typedArray.
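A minimal sketch of filling a typed array in place:

const {
  getRandomValues,
} = await import('node:crypto');

// Fills the typed array with cryptographically strong random values
// and returns that same array.
const array = new Uint32Array(4);
getRandomValues(array);
console.log(array);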
- algorithm: string, ): string;

A utility for creating one-shot hash digests of data. It can be faster than the object-based crypto.createHash() when hashing a smaller amount of data (<= 5MB) that's readily available. If the data can be big or if it is streamed, it's still recommended to use crypto.createHash() instead.

The algorithm is dependent on the available algorithms supported by the version of OpenSSL on the platform. Examples are 'sha256', 'sha512', etc. On recent releases of OpenSSL, openssl list -digest-algorithms will display the available digest algorithms.

Example:

import crypto from 'node:crypto';
import { Buffer } from 'node:buffer';

// Hash a string and return the result as a hex-encoded string.
const string = 'Node.js';
// 10b3493287f831e81a438811a1ffba01f8cec4b7
console.log(crypto.hash('sha1', string));

// Decode a base64-encoded string into a Buffer, hash it and return
// the result as a buffer.
const base64 = 'Tm9kZS5qcw==';
// <Buffer 10 b3 49 32 87 f8 31 e8 1a 43 88 11 a1 ff ba 01 f8 ce c4 b7>
console.log(crypto.hash('sha1', Buffer.from(base64, 'base64'), 'buffer'));

@param data When data is a string, it will be encoded as UTF-8 before being hashed. If a different input encoding is desired for a string input, encode the string into a TypedArray using either TextEncoder or Buffer.from() and pass the encoded TypedArray into this API instead.

@param outputEncoding Encoding used to encode the returned digest.
The overloads accepting outputEncoding: 'buffer' (returning a Buffer) or another outputEncoding value carry the same documentation.
- digest: string, keylen: number, ): void;

HKDF is a simple key derivation function defined in RFC 5869. The given ikm, salt and info are used with the digest to derive a key of keylen bytes.

The supplied callback function is called with two arguments: err and derivedKey. If an error occurs while deriving the key, err will be set; otherwise err will be null. The successfully generated derivedKey will be passed to the callback as an ArrayBuffer. An error will be thrown if any of the input arguments specify invalid values or types.

import { Buffer } from 'node:buffer';
const {
  hkdf,
} = await import('node:crypto');

hkdf('sha512', 'key', 'salt', 'info', 64, (err, derivedKey) => {
  if (err) throw err;
  console.log(Buffer.from(derivedKey).toString('hex'));  // '24156e2...5391653'
});

@param digest The digest algorithm to use.

@param ikm The input keying material. Must be provided but can be zero-length.

@param salt The salt value. Must be provided but can be zero-length.

@param info Additional info value. Must be provided but can be zero-length, and cannot be more than 1024 bytes.

@param keylen The length of the key to generate. Must be greater than 0. The maximum allowable value is 255 times the number of bytes produced by the selected digest function (e.g. sha512 generates 64-byte hashes, making the maximum HKDF output 16320 bytes).

- digest: string, keylen: number
Provides a synchronous HKDF key derivation function as defined in RFC 5869. The given ikm, salt and info are used with the digest to derive a key of keylen bytes.

The successfully generated derivedKey will be returned as an ArrayBuffer.

An error will be thrown if any of the input arguments specify invalid values or types, or if the derived key cannot be generated.

import { Buffer } from 'node:buffer';
const {
  hkdfSync,
} = await import('node:crypto');

const derivedKey = hkdfSync('sha512', 'key', 'salt', 'info', 64);
console.log(Buffer.from(derivedKey).toString('hex'));  // '24156e2...5391653'

@param digest The digest algorithm to use.

@param ikm The input keying material. Must be provided but can be zero-length.

@param salt The salt value. Must be provided but can be zero-length.

@param info Additional info value. Must be provided but can be zero-length, and cannot be more than 1024 bytes.

@param keylen The length of the key to generate. Must be greater than 0. The maximum allowable value is 255 times the number of bytes produced by the selected digest function (e.g. sha512 generates 64-byte hashes, making the maximum HKDF output 16320 bytes).

- iterations: number, keylen: number, digest: string, ): void;
Provides an asynchronous Password-Based Key Derivation Function 2 (PBKDF2) implementation. A selected HMAC digest algorithm specified by digest is applied to derive a key of the requested byte length (keylen) from the password, salt and iterations.

The supplied callback function is called with two arguments: err and derivedKey. If an error occurs while deriving the key, err will be set; otherwise err will be null. By default, the successfully generated derivedKey will be passed to the callback as a Buffer. An error will be thrown if any of the input arguments specify invalid values or types.

The iterations argument must be a number set as high as possible. The higher the number of iterations, the more secure the derived key will be, but the derivation will take longer to complete.

The salt should be as unique as possible. It is recommended that a salt is random and at least 16 bytes long. See NIST SP 800-132 for details.

When passing strings for password or salt, please consider caveats when using strings as inputs to cryptographic APIs.

const {
  pbkdf2,
} = await import('node:crypto');

pbkdf2('secret', 'salt', 100000, 64, 'sha512', (err, derivedKey) => {
  if (err) throw err;
  console.log(derivedKey.toString('hex'));  // '3745e48...08d59ae'
});

An array of supported digest functions can be retrieved using getHashes.

This API uses libuv's threadpool, which can have surprising and negative performance implications for some applications; see the UV_THREADPOOL_SIZE documentation for more information.

- iterations: number, keylen: number, digest: string
Provides a synchronous Password-Based Key Derivation Function 2 (PBKDF2) implementation. A selected HMAC digest algorithm specified by digest is applied to derive a key of the requested byte length (keylen) from the password, salt and iterations.

If an error occurs an Error will be thrown, otherwise the derived key will be returned as a Buffer.

The iterations argument must be a number set as high as possible. The higher the number of iterations, the more secure the derived key will be, but the derivation will take longer to complete.

The salt should be as unique as possible. It is recommended that a salt is random and at least 16 bytes long. See NIST SP 800-132 for details.

When passing strings for password or salt, please consider caveats when using strings as inputs to cryptographic APIs.

const {
  pbkdf2Sync,
} = await import('node:crypto');

const key = pbkdf2Sync('secret', 'salt', 100000, 64, 'sha512');
console.log(key.toString('hex'));  // '3745e48...08d59ae'

An array of supported digest functions can be retrieved using getHashes.
- buffer: string | ArrayBufferView<ArrayBufferLike>

Decrypts buffer with privateKey. buffer was previously encrypted using the corresponding public key, for example using publicEncrypt.

If privateKey is not a KeyObject, this function behaves as if privateKey had been passed to createPrivateKey. If it is an object, the padding property can be passed. Otherwise, this function uses RSA_PKCS1_OAEP_PADDING.
- buffer: string | ArrayBufferView<ArrayBufferLike>

Encrypts buffer with privateKey. The returned data can be decrypted using the corresponding public key, for example using publicDecrypt.

If privateKey is not a KeyObject, this function behaves as if privateKey had been passed to createPrivateKey. If it is an object, the padding property can be passed. Otherwise, this function uses RSA_PKCS1_PADDING.
- buffer: string | ArrayBufferView<ArrayBufferLike>

Decrypts buffer with key. buffer was previously encrypted using the corresponding private key, for example using privateEncrypt.

If key is not a KeyObject, this function behaves as if key had been passed to createPublicKey. If it is an object, the padding property can be passed. Otherwise, this function uses RSA_PKCS1_PADDING.

Because RSA public keys can be derived from private keys, a private key may be passed instead of a public key.
- buffer: string | ArrayBufferView<ArrayBufferLike>

Encrypts the content of buffer with key and returns a new Buffer with encrypted content. The returned data can be decrypted using the corresponding private key, for example using privateDecrypt.

If key is not a KeyObject, this function behaves as if key had been passed to createPublicKey. If it is an object, the padding property can be passed. Otherwise, this function uses RSA_PKCS1_OAEP_PADDING.

Because RSA public keys can be derived from private keys, a private key may be passed instead of a public key.
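A minimal round-trip sketch pairing publicEncrypt with privateDecrypt (the 2048-bit modulus and the message are illustrative):

import { Buffer } from 'node:buffer';
const {
  generateKeyPairSync,
  publicEncrypt,
  privateDecrypt,
} = await import('node:crypto');

const { publicKey, privateKey } = generateKeyPairSync('rsa', {
  modulusLength: 2048,
});

// Encrypt with the public key (RSA_PKCS1_OAEP_PADDING by default)...
const encrypted = publicEncrypt(publicKey, Buffer.from('some secret'));

// ...and decrypt with the corresponding private key.
const decrypted = privateDecrypt(privateKey, encrypted);
console.log(decrypted.toString()); // 'some secret'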
- size: number

Generates cryptographically strong pseudorandom data. The size argument is a number indicating the number of bytes to generate.

If a callback function is provided, the bytes are generated asynchronously and the callback function is invoked with two arguments: err and buf. If an error occurs, err will be an Error object; otherwise it is null. The buf argument is a Buffer containing the generated bytes.

// Asynchronous
const {
  randomBytes,
} = await import('node:crypto');

randomBytes(256, (err, buf) => {
  if (err) throw err;
  console.log(`${buf.length} bytes of random data: ${buf.toString('hex')}`);
});

If the callback function is not provided, the random bytes are generated synchronously and returned as a Buffer. An error will be thrown if there is a problem generating the bytes.

// Synchronous
const {
  randomBytes,
} = await import('node:crypto');

const buf = randomBytes(256);
console.log(`${buf.length} bytes of random data: ${buf.toString('hex')}`);

The crypto.randomBytes() method will not complete until there is sufficient entropy available. This should normally never take longer than a few milliseconds. The only time when generating the random bytes may conceivably block for a longer period of time is right after boot, when the whole system is still low on entropy.

This API uses libuv's threadpool, which can have surprising and negative performance implications for some applications; see the UV_THREADPOOL_SIZE documentation for more information.

The asynchronous version of crypto.randomBytes() is carried out in a single threadpool request. To minimize threadpool task length variation, partition large randomBytes requests when doing so as part of fulfilling a client request.

@param size The number of bytes to generate. The size must not be larger than 2**31 - 1.

@returns A Buffer if the callback function is not provided.
The overload taking a callback, (size: number, callback): void, carries the same documentation.

- buffer: T, ): void;
This function is similar to randomBytes but requires the first argument to be a Buffer that will be filled. It also requires that a callback is passed in.

If the callback function is not provided, an error will be thrown.

import { Buffer } from 'node:buffer';
const { randomFill } = await import('node:crypto');

const buf = Buffer.alloc(10);
randomFill(buf, (err, buf) => {
  if (err) throw err;
  console.log(buf.toString('hex'));
});

randomFill(buf, 5, (err, buf) => {
  if (err) throw err;
  console.log(buf.toString('hex'));
});

// The above is equivalent to the following:
randomFill(buf, 5, 5, (err, buf) => {
  if (err) throw err;
  console.log(buf.toString('hex'));
});

Any ArrayBuffer, TypedArray, or DataView instance may be passed as buffer.

While this includes instances of Float32Array and Float64Array, this function should not be used to generate random floating-point numbers. The result may contain +Infinity, -Infinity, and NaN, and even if the array contains finite numbers only, they are not drawn from a uniform random distribution and have no meaningful lower or upper bounds.

import { Buffer } from 'node:buffer';
const { randomFill } = await import('node:crypto');

const a = new Uint32Array(10);
randomFill(a, (err, buf) => {
  if (err) throw err;
  console.log(Buffer.from(buf.buffer, buf.byteOffset, buf.byteLength)
    .toString('hex'));
});

const b = new DataView(new ArrayBuffer(10));
randomFill(b, (err, buf) => {
  if (err) throw err;
  console.log(Buffer.from(buf.buffer, buf.byteOffset, buf.byteLength)
    .toString('hex'));
});

const c = new ArrayBuffer(10);
randomFill(c, (err, buf) => {
  if (err) throw err;
  console.log(Buffer.from(buf).toString('hex'));
});

This API uses libuv's threadpool, which can have surprising and negative performance implications for some applications; see the UV_THREADPOOL_SIZE documentation for more information.

The asynchronous version of crypto.randomFill() is carried out in a single threadpool request. To minimize threadpool task length variation, partition large randomFill requests when doing so as part of fulfilling a client request.

@param buffer Must be supplied. The size of the provided buffer must not be larger than 2**31 - 1.

@param callback function(err, buf) {}.
The overloads that additionally take an offset, (buffer, offset, callback): void and (buffer, offset, size, callback): void, carry the same documentation.

- buffer: T, offset?: number, size?: number): T;
Synchronous version of randomFill.

import { Buffer } from 'node:buffer';
const { randomFillSync } = await import('node:crypto');

const buf = Buffer.alloc(10);
console.log(randomFillSync(buf).toString('hex'));

randomFillSync(buf, 5);
console.log(buf.toString('hex'));

// The above is equivalent to the following:
randomFillSync(buf, 5, 5);
console.log(buf.toString('hex'));

Any ArrayBuffer, TypedArray or DataView instance may be passed as buffer.

import { Buffer } from 'node:buffer';
const { randomFillSync } = await import('node:crypto');

const a = new Uint32Array(10);
console.log(Buffer.from(randomFillSync(a).buffer,
                        a.byteOffset, a.byteLength).toString('hex'));

const b = new DataView(new ArrayBuffer(10));
console.log(Buffer.from(randomFillSync(b).buffer,
                        b.byteOffset, b.byteLength).toString('hex'));

const c = new ArrayBuffer(10);
console.log(Buffer.from(randomFillSync(c)).toString('hex'));

@param buffer Must be supplied. The size of the provided buffer must not be larger than 2**31 - 1.

@returns The object passed as buffer argument.

- max: number): number;
Return a random integer n such that min <= n < max. This implementation avoids modulo bias.

The range (max - min) must be less than 2**48. min and max must be safe integers.

If the callback function is not provided, the random integer is generated synchronously.

// Asynchronous
const {
  randomInt,
} = await import('node:crypto');

randomInt(3, (err, n) => {
  if (err) throw err;
  console.log(`Random number chosen from (0, 1, 2): ${n}`);
});

// Synchronous
const {
  randomInt,
} = await import('node:crypto');

const n = randomInt(3);
console.log(`Random number chosen from (0, 1, 2): ${n}`);

// With `min` argument
const {
  randomInt,
} = await import('node:crypto');

const n = randomInt(1, 7);
console.log(`The dice rolled: ${n}`);

@param max End of random range (exclusive).
The remaining randomInt overloads, (min, max), (max, callback), and (min, max, callback), carry the same documentation, with @param min Start of random range (inclusive) and @param callback function(err, n) {}.

- ): `${string}-${string}-${string}-${string}-${string}`;
Generates a random RFC 4122 version 4 UUID. The UUID is generated using a cryptographic pseudorandom number generator.
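A minimal sketch (the printed value is illustrative):

const {
  randomUUID,
} = await import('node:crypto');

console.log(randomUUID());
// e.g. '36b8f84d-df4e-4d49-b662-bcde71a8764f'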
- keylen: number, ): void;

Provides an asynchronous scrypt implementation. Scrypt is a password-based key derivation function that is designed to be expensive computationally and memory-wise in order to make brute-force attacks unrewarding.

The salt should be as unique as possible. It is recommended that a salt is random and at least 16 bytes long. See NIST SP 800-132 for details.

When passing strings for password or salt, please consider caveats when using strings as inputs to cryptographic APIs.

The callback function is called with two arguments: err and derivedKey. err is an exception object when key derivation fails, otherwise err is null. derivedKey is passed to the callback as a Buffer.

An exception is thrown when any of the input arguments specify invalid values or types.

const {
  scrypt,
} = await import('node:crypto');

// Using the factory defaults.
scrypt('password', 'salt', 64, (err, derivedKey) => {
  if (err) throw err;
  console.log(derivedKey.toString('hex'));  // '3745e48...08d59ae'
});

// Using a custom N parameter. Must be a power of two.
scrypt('password', 'salt', 64, { N: 1024 }, (err, derivedKey) => {
  if (err) throw err;
  console.log(derivedKey.toString('hex'));  // '3745e48...aa39b34'
});
The overload that additionally accepts an options object carries the same documentation.
- keylen: number,

Provides a synchronous scrypt implementation. Scrypt is a password-based key derivation function that is designed to be expensive computationally and memory-wise in order to make brute-force attacks unrewarding.

The salt should be as unique as possible. It is recommended that a salt is random and at least 16 bytes long. See NIST SP 800-132 for details.

When passing strings for password or salt, please consider caveats when using strings as inputs to cryptographic APIs.

An exception is thrown when key derivation fails, otherwise the derived key is returned as a Buffer.

An exception is thrown when any of the input arguments specify invalid values or types.

const {
  scryptSync,
} = await import('node:crypto');

// Using the factory defaults.
const key1 = scryptSync('password', 'salt', 64);
console.log(key1.toString('hex'));  // '3745e48...08d59ae'

// Using a custom N parameter. Must be a power of two.
const key2 = scryptSync('password', 'salt', 64, { N: 1024 });
console.log(key2.toString('hex'));  // '3745e48...aa39b34'
- engine: string, flags?: number): void;

Load and set the engine for some or all OpenSSL functions (selected by flags). engine could be either an id or a path to the engine's shared library.

The optional flags argument uses ENGINE_METHOD_ALL by default. flags is a bit field taking one of, or a mix of, the following flags (defined in crypto.constants):

- crypto.constants.ENGINE_METHOD_RSA
- crypto.constants.ENGINE_METHOD_DSA
- crypto.constants.ENGINE_METHOD_DH
- crypto.constants.ENGINE_METHOD_RAND
- crypto.constants.ENGINE_METHOD_EC
- crypto.constants.ENGINE_METHOD_CIPHERS
- crypto.constants.ENGINE_METHOD_DIGESTS
- crypto.constants.ENGINE_METHOD_PKEY_METHS
- crypto.constants.ENGINE_METHOD_PKEY_ASN1_METHS
- crypto.constants.ENGINE_METHOD_ALL
- crypto.constants.ENGINE_METHOD_NONE
- bool: boolean): void;

Enables the FIPS compliant crypto provider in a FIPS-enabled Node.js build. Throws an error if FIPS mode is not available.

@param bool true to enable FIPS mode.

- algorithm: undefined | null | string, data: ArrayBufferView,
Calculates and returns the signature for data using the given private key and algorithm. If algorithm is null or undefined, then the algorithm is dependent upon the key type (especially Ed25519 and Ed448).

If key is not a KeyObject, this function behaves as if key had been passed to createPrivateKey. If it is an object, the following additional properties can be passed:

If the callback function is provided, this function uses libuv's threadpool.
The callback-taking overload, (algorithm, data, key, callback): void, carries the same documentation.
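A minimal sketch pairing the one-shot sign and verify functions with an Ed25519 key, for which algorithm must be null or undefined:

import { Buffer } from 'node:buffer';
const {
  generateKeyPairSync,
  sign,
  verify,
} = await import('node:crypto');

const { publicKey, privateKey } = generateKeyPairSync('ed25519');
const data = Buffer.from('some data to sign');

// For Ed25519 the digest algorithm is implied by the key type.
const signature = sign(null, data, privateKey);
console.log(verify(null, data, publicKey, signature)); // true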
- a: ArrayBufferView, b: ArrayBufferView): boolean;

This function compares the underlying bytes that represent the given ArrayBuffer, TypedArray, or DataView instances using a constant-time algorithm.

This function does not leak timing information that would allow an attacker to guess one of the values. This is suitable for comparing HMAC digests or secret values like authentication cookies or capability URLs.

a and b must both be Buffers, TypedArrays, or DataViews, and they must have the same byte length. An error is thrown if a and b have different byte lengths.

If at least one of a and b is a TypedArray with more than one byte per entry, such as Uint16Array, the result will be computed using the platform byte order.

When both of the inputs are Float32Arrays or Float64Arrays, this function might return unexpected results due to IEEE 754 encoding of floating-point numbers. In particular, neither x === y nor Object.is(x, y) implies that the byte representations of two floating-point numbers x and y are equal.

Use of crypto.timingSafeEqual does not guarantee that the surrounding code is timing-safe. Care should be taken to ensure that the surrounding code does not introduce timing vulnerabilities.
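A minimal sketch of a constant-time token comparison; the length guard is needed because unequal byte lengths throw:

import { Buffer } from 'node:buffer';
const {
  timingSafeEqual,
} = await import('node:crypto');

const expected = Buffer.from('well-known-secret-token');
const received = Buffer.from('well-known-secret-token');

// timingSafeEqual throws if the byte lengths differ, so guard first.
const match = expected.length === received.length &&
  timingSafeEqual(expected, received);
console.log(match); // true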
- algorithm: undefined | null | string, data: ArrayBufferView, signature: ArrayBufferView): boolean;

Verifies the given signature for data using the given key and algorithm. If algorithm is null or undefined, then the algorithm is dependent upon the key type (especially Ed25519 and Ed448).

If key is not a KeyObject, this function behaves as if key had been passed to createPublicKey. If it is an object, the following additional properties can be passed:

The signature argument is the previously calculated signature for the data.

Because public keys can be derived from private keys, a private key or a public key may be passed for key.

If the callback function is provided, this function uses libuv's threadpool.
The callback-taking overload, (algorithm, data, key, signature, callback): void, carries the same documentation.
Type definitions
- type: 'rsa', ): void;

Generates a new asymmetric key pair of the given type. RSA, RSA-PSS, DSA, EC, Ed25519, Ed448, X25519, X448, and DH are currently supported.

If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:

const {
  generateKeyPair,
} = await import('node:crypto');

generateKeyPair('rsa', {
  modulusLength: 4096,
  publicKeyEncoding: {
    type: 'spki',
    format: 'pem',
  },
  privateKeyEncoding: {
    type: 'pkcs8',
    format: 'pem',
    cipher: 'aes-256-cbc',
    passphrase: 'top secret',
  },
}, (err, publicKey, privateKey) => {
  // Handle errors and use the generated key pair.
});

On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.

@param type Must be 'rsa', 'rsa-pss', 'dsa', 'ec', 'ed25519', 'ed448', 'x25519', 'x448', or 'dh'.
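A minimal sketch of the util.promisify()ed form mentioned above (the 'ed25519' type and empty options object are illustrative):

const { promisify } = await import('node:util');
const { generateKeyPair } = await import('node:crypto');

const generateKeyPairAsync = promisify(generateKeyPair);

// Resolves to an object with publicKey and privateKey properties.
const { publicKey, privateKey } = await generateKeyPairAsync('ed25519', {});
console.log(publicKey.asymmetricKeyType); // 'ed25519'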
The remaining overloads repeat the description above for each supported key type:

- type: 'rsa',): void;
- type: 'rsa-pss',): void;
- type: 'dsa',): void;
- type: 'ec',): void;
- type: 'ed25519',): void;
- type: 'ed448',): void;
- type: 'x25519',): void;
- type: 'x448',): void;
.namespace generateKeyPair
namespace webcrypto
An implementation of the Web Crypto API standard.
See the Web Crypto API documentation for details.
interface AesCbcParams
interface AesCtrParams
interface AesDerivedKeyParams
interface AesGcmParams
interface AesKeyAlgorithm
interface AesKeyGenParams
interface Crypto
Importing the webcrypto object (import { webcrypto } from 'node:crypto') gives an instance of the Crypto class. Crypto is a singleton that provides access to the remainder of the crypto API.

- getRandomValues<T extends Uint8Array<ArrayBufferLike> | Uint8ClampedArray<ArrayBufferLike> | Uint16Array<ArrayBufferLike> | Uint32Array<ArrayBufferLike> | Int8Array<ArrayBufferLike> | Int16Array<ArrayBufferLike> | Int32Array<ArrayBufferLike> | BigUint64Array<ArrayBufferLike> | BigInt64Array<ArrayBufferLike>>(typedArray: T): T;
Generates cryptographically strong random values. The given typedArray is filled with random values, and a reference to typedArray is returned.

The given typedArray must be an integer-based instance of NodeJS.TypedArray, i.e. Float32Array and Float64Array are not accepted.

An error will be thrown if the given typedArray is larger than 65,536 bytes.

Generates a random RFC 4122 version 4 UUID. The UUID is generated using a cryptographic pseudorandom number generator.
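A minimal sketch of both operations:

const { webcrypto } = await import('node:crypto');

// Fill a 16-byte typed array with cryptographically strong random values.
const bytes = webcrypto.getRandomValues(new Uint8Array(16));
console.log(Buffer.from(bytes).toString('hex'));

// Generate a random version 4 UUID.
console.log(webcrypto.randomUUID());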
interface CryptoKey
- readonly algorithm: KeyAlgorithm
An object detailing the algorithm for which the key can be used along with additional algorithm-specific parameters.
- readonly extractable: boolean
When true, the CryptoKey can be extracted using either subtleCrypto.exportKey() or subtleCrypto.wrapKey().

- readonly usages: KeyUsage[]
An array of strings identifying the operations for which the key may be used.
The possible usages are:
'encrypt' - The key may be used to encrypt data.
'decrypt' - The key may be used to decrypt data.
'sign' - The key may be used to generate digital signatures.
'verify' - The key may be used to verify digital signatures.
'deriveKey' - The key may be used to derive a new key.
'deriveBits' - The key may be used to derive bits.
'wrapKey' - The key may be used to wrap another key.
'unwrapKey' - The key may be used to unwrap another key.

Valid key usages depend on the key algorithm (identified by cryptokey.algorithm.name).
interface CryptoKeyConstructor
interface CryptoKeyPair
The CryptoKeyPair is a simple dictionary object with publicKey and privateKey properties, representing an asymmetric key pair.

interface EcdhKeyDeriveParams
interface EcdsaParams
interface EcKeyAlgorithm
interface EcKeyGenParams
interface EcKeyImportParams
interface Ed448Params
interface HkdfParams
interface HmacImportParams
interface HmacKeyAlgorithm
interface HmacKeyGenParams
interface KeyAlgorithm
interface Pbkdf2Params
interface RsaHashedImportParams
interface RsaHashedKeyAlgorithm
interface RsaHashedKeyGenParams
interface RsaKeyAlgorithm
interface RsaKeyGenParams
interface RsaOaepParams
interface RsaOtherPrimesInfo
interface RsaPssParams
interface SubtleCrypto
Using the method and parameters specified in algorithm and the keying material provided by key, subtle.decrypt() attempts to decipher the provided data. If successful, the returned promise will be resolved with an <ArrayBuffer> containing the plaintext result.

The algorithms currently supported include:
'RSA-OAEP'
'AES-CTR'
'AES-CBC'
'AES-GCM'
- length?: null | number
Using the method and parameters specified in algorithm and the keying material provided by baseKey, subtle.deriveBits() attempts to generate length bits. The Node.js implementation requires that when length is a number it must be a multiple of 8. When length is null the maximum number of bits for a given algorithm is generated. This is allowed for the 'ECDH', 'X25519', and 'X448' algorithms. If successful, the returned promise will be resolved with an <ArrayBuffer> containing the generated data.

The algorithms currently supported include:
'ECDH'
'X25519'
'X448'
'HKDF'
'PBKDF2'
length: number

- extractable: boolean,
Using the method and parameters specified in algorithm, and the keying material provided by baseKey, subtle.deriveKey() attempts to generate a new <CryptoKey> based on the method and parameters in derivedKeyAlgorithm.

Calling subtle.deriveKey() is equivalent to calling subtle.deriveBits() to generate raw keying material, then passing the result into the subtle.importKey() method using the deriveKeyAlgorithm, extractable, and keyUsages parameters as input.

The algorithms currently supported include:
'ECDH'
'X25519'
'X448'
'HKDF'
'PBKDF2'
@param keyUsages See Key usages.
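A minimal sketch deriving an AES key from a password with PBKDF2 (the password, salt, and iteration count are illustrative):

const { webcrypto } = await import('node:crypto');
const { subtle } = webcrypto;

// Import the raw password as a PBKDF2 base key (must not be extractable).
const passwordKey = await subtle.importKey(
  'raw',
  new TextEncoder().encode('correct horse battery staple'),
  'PBKDF2',
  false,
  ['deriveKey'],
);

// Derive a 256-bit AES-GCM key from the password.
const aesKey = await subtle.deriveKey(
  {
    name: 'PBKDF2',
    salt: webcrypto.getRandomValues(new Uint8Array(16)),
    iterations: 100_000,
    hash: 'SHA-256',
  },
  passwordKey,
  { name: 'AES-GCM', length: 256 },
  true,
  ['encrypt', 'decrypt'],
);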
Using the method identified by algorithm, subtle.digest() attempts to generate a digest of data. If successful, the returned promise is resolved with an <ArrayBuffer> containing the computed digest.

If algorithm is provided as a <string>, it must be one of:

'SHA-1'
'SHA-256'
'SHA-384'
'SHA-512'

If algorithm is provided as an <Object>, it must have a name property whose value is one of the above.
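For instance, a minimal sketch computing a SHA-256 digest:

const { webcrypto } = await import('node:crypto');

const data = new TextEncoder().encode('some data');
const digest = await webcrypto.subtle.digest('SHA-256', data);

// The result is an ArrayBuffer containing the 32-byte hash.
console.log(Buffer.from(digest).toString('hex'));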
Using the method and parameters specified by algorithm and the keying material provided by key, subtle.encrypt() attempts to encipher data. If successful, the returned promise is resolved with an <ArrayBuffer> containing the encrypted result.

The algorithms currently supported include:
'RSA-OAEP'
'AES-CTR'
'AES-CBC'
'AES-GCM'
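A minimal sketch of an AES-GCM encrypt/decrypt round trip (the key and IV are generated here for illustration):

const { webcrypto } = await import('node:crypto');
const { subtle } = webcrypto;

const key = await subtle.generateKey({ name: 'AES-GCM', length: 256 }, true, [
  'encrypt',
  'decrypt',
]);
// AES-GCM requires a unique IV per encryption; 12 bytes is the common choice.
const iv = webcrypto.getRandomValues(new Uint8Array(12));

const ciphertext = await subtle.encrypt(
  { name: 'AES-GCM', iv },
  key,
  new TextEncoder().encode('secret message'),
);
const plaintext = await subtle.decrypt({ name: 'AES-GCM', iv }, key, ciphertext);
console.log(new TextDecoder().decode(plaintext)); // 'secret message'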
- format: 'jwk',
Exports the given key into the specified format, if supported.
If the <CryptoKey> is not extractable, the returned promise will reject.

When format is either 'pkcs8' or 'spki' and the export is successful, the returned promise will be resolved with an <ArrayBuffer> containing the exported key data.

When format is 'jwk' and the export is successful, the returned promise will be resolved with a JavaScript object conforming to the JSON Web Key specification.

@param format Must be one of 'raw', 'pkcs8', 'spki', or 'jwk'.
@returns <Promise> containing <ArrayBuffer>.

- extractable: boolean,
Using the method and parameters provided in algorithm, subtle.generateKey() attempts to generate new keying material. Depending on the method used, the method may generate either a single <CryptoKey> or a <CryptoKeyPair>.

The <CryptoKeyPair> (public and private key) generating algorithms supported include:

'RSASSA-PKCS1-v1_5'
'RSA-PSS'
'RSA-OAEP'
'ECDSA'
'Ed25519'
'Ed448'
'ECDH'
'X25519'
'X448'

The <CryptoKey> (secret key) generating algorithms supported include:

'HMAC'
'AES-CTR'
'AES-CBC'
'AES-GCM'
'AES-KW'

@param keyUsages See Key usages.

- format: 'jwk', algorithm: AlgorithmIdentifier | RsaHashedImportParams | EcKeyImportParams | HmacImportParams | AesKeyAlgorithm, extractable: boolean,
The subtle.importKey() method attempts to interpret the provided keyData as the given format to create a <CryptoKey> instance using the provided algorithm, extractable, and keyUsages arguments. If the import is successful, the returned promise will be resolved with the created <CryptoKey>.

If importing a 'PBKDF2' key, extractable must be false.

@param format Must be one of 'raw', 'pkcs8', 'spki', or 'jwk'.
@param keyUsages See Key usages.
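A minimal sketch exporting a generated AES key as a JSON Web Key and re-importing it:

const { webcrypto } = await import('node:crypto');
const { subtle } = webcrypto;

const key = await subtle.generateKey({ name: 'AES-GCM', length: 128 }, true, [
  'encrypt',
  'decrypt',
]);

// Export as a JSON Web Key object, then import it back.
const jwk = await subtle.exportKey('jwk', key);
const restored = await subtle.importKey('jwk', jwk, 'AES-GCM', true, [
  'encrypt',
  'decrypt',
]);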
format: 'spki' | 'pkcs8' | 'raw', algorithm: AlgorithmIdentifier | RsaHashedImportParams | EcKeyImportParams | HmacImportParams | AesKeyAlgorithm, extractable: boolean,

- sign(

Using the method and parameters given by algorithm and the keying material provided by key, subtle.sign() attempts to generate a cryptographic signature of data. If successful, the returned promise is resolved with an <ArrayBuffer> containing the generated signature.

The algorithms currently supported include:
'RSASSA-PKCS1-v1_5'
'RSA-PSS'
'ECDSA'
'Ed25519'
'Ed448'
'HMAC'
- unwrappedKeyAlgorithm: AlgorithmIdentifier | RsaHashedImportParams | EcKeyImportParams | HmacImportParams | AesKeyAlgorithm,extractable: boolean,
In cryptography, "wrapping a key" refers to exporting and then encrypting the keying material. The subtle.unwrapKey() method attempts to decrypt a wrapped key and create a <CryptoKey> instance. It is equivalent to calling subtle.decrypt() first on the encrypted key data (using the wrappedKey, unwrapAlgo, and unwrappingKey arguments as input) then passing the results in to the subtle.importKey() method using the unwrappedKeyAlgo, extractable, and keyUsages arguments as inputs. If successful, the returned promise is resolved with a <CryptoKey> object.

The wrapping algorithms currently supported include:
'RSA-OAEP'
'AES-CTR'
'AES-CBC'
'AES-GCM'
'AES-KW'
The unwrapped key algorithms supported include:
'RSASSA-PKCS1-v1_5'
'RSA-PSS'
'RSA-OAEP'
'ECDSA'
'Ed25519'
'Ed448'
'ECDH'
'X25519'
'X448'
'HMAC'
'AES-CTR'
'AES-CBC'
'AES-GCM'
'AES-KW'
@param format Must be one of 'raw', 'pkcs8', 'spki', or 'jwk'.
@param keyUsages See Key usages.
- ): Promise<boolean>;
Using the method and parameters given in algorithm and the keying material provided by key, subtle.verify() attempts to verify that signature is a valid cryptographic signature of data. The returned promise is resolved with either true or false.

The algorithms currently supported include:
'RSASSA-PKCS1-v1_5'
'RSA-PSS'
'ECDSA'
'Ed25519'
'Ed448'
'HMAC'
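A minimal sketch of signing and verifying with an HMAC key:

const { webcrypto } = await import('node:crypto');
const { subtle } = webcrypto;

const key = await subtle.generateKey({ name: 'HMAC', hash: 'SHA-256' }, false, [
  'sign',
  'verify',
]);
const data = new TextEncoder().encode('message');

const signature = await subtle.sign('HMAC', key, data);
console.log(await subtle.verify('HMAC', key, signature, data)); // true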
In cryptography, "wrapping a key" refers to exporting and then encrypting the keying material. The subtle.wrapKey() method exports the keying material into the format identified by format, then encrypts it using the method and parameters specified by wrapAlgo and the keying material provided by wrappingKey. It is equivalent to calling subtle.exportKey() using format and key as the arguments, then passing the result to the subtle.encrypt() method using wrappingKey and wrapAlgo as inputs. If successful, the returned promise will be resolved with an <ArrayBuffer> containing the encrypted key data.

The wrapping algorithms currently supported include:
'RSA-OAEP'
'AES-CTR'
'AES-CBC'
'AES-GCM'
'AES-KW'
@param format Must be one of 'raw', 'pkcs8', 'spki', or 'jwk'.
@param keyUsages See Key usages.
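A minimal sketch wrapping an AES key with AES-KW and unwrapping it again:

const { webcrypto } = await import('node:crypto');
const { subtle } = webcrypto;

const wrappingKey = await subtle.generateKey({ name: 'AES-KW', length: 256 }, false, [
  'wrapKey',
  'unwrapKey',
]);
const keyToWrap = await subtle.generateKey({ name: 'AES-GCM', length: 256 }, true, [
  'encrypt',
  'decrypt',
]);

const wrapped = await subtle.wrapKey('raw', keyToWrap, wrappingKey, 'AES-KW');
const unwrapped = await subtle.unwrapKey(
  'raw',
  wrapped,
  wrappingKey,
  'AES-KW',
  'AES-GCM',
  true,
  ['encrypt', 'decrypt'],
);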
- type AlgorithmIdentifier = Algorithm | string
- type BigInteger = Uint8Array
- type BufferSource = ArrayBufferView | ArrayBuffer
- type KeyFormat = 'jwk' | 'pkcs8' | 'raw' | 'spki'
- type KeyType = 'private' | 'public' | 'secret'
- type KeyUsage = 'decrypt' | 'deriveBits' | 'deriveKey' | 'encrypt' | 'sign' | 'unwrapKey' | 'verify' | 'wrapKey'
- type NamedCurve = string
interface AsymmetricKeyDetails
interface BasePrivateKeyEncodingOptions<T extends KeyFormat>
interface CheckPrimeOptions
- checks?: number
The number of Miller-Rabin probabilistic primality iterations to perform. When the value is 0 (zero), a number of checks is used that yields a false positive rate of at most 2**-64 for random input. Care must be used when selecting a number of checks. Refer to the OpenSSL documentation for the BN_is_prime_ex function's nchecks option for more details.
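A minimal sketch passing the checks option to checkPrime (the candidate here is illustrative):

const { checkPrime } = await import('node:crypto');

// Run 10 Miller-Rabin iterations against a candidate prime.
checkPrime(2n ** 127n - 1n, { checks: 10 }, (err, result) => {
  if (err) throw err;
  console.log(result); // true; 2**127 - 1 is a Mersenne prime
});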
interface CipherCCM
Instances of the Cipher class are used to encrypt data. The class can be used in one of two ways:
- As a stream that is both readable and writable, where plain unencrypted data is written to produce encrypted data on the readable side, or
- Using the cipher.update() and cipher.final() methods to produce the encrypted data.
The createCipheriv method is used to create Cipher instances. Cipher objects are not to be created directly using the new keyword.
Example: Using Cipher
objects as streams:const { scrypt, randomFill, createCipheriv, } = await import('node:crypto'); const algorithm = 'aes-192-cbc'; const password = 'Password used to generate key'; // First, we'll generate the key. The key length is dependent on the algorithm. // In this case for aes192, it is 24 bytes (192 bits). scrypt(password, 'salt', 24, (err, key) => { if (err) throw err; // Then, we'll generate a random initialization vector randomFill(new Uint8Array(16), (err, iv) => { if (err) throw err; // Once we have the key and iv, we can create and use the cipher... const cipher = createCipheriv(algorithm, key, iv); let encrypted = ''; cipher.setEncoding('hex'); cipher.on('data', (chunk) => encrypted += chunk); cipher.on('end', () => console.log(encrypted)); cipher.write('some clear text data'); cipher.end(); }); });
Example: Using Cipher
and piped streams:import { createReadStream, createWriteStream, } from 'node:fs'; import { pipeline, } from 'node:stream'; const { scrypt, randomFill, createCipheriv, } = await import('node:crypto'); const algorithm = 'aes-192-cbc'; const password = 'Password used to generate key'; // First, we'll generate the key. The key length is dependent on the algorithm. // In this case for aes192, it is 24 bytes (192 bits). scrypt(password, 'salt', 24, (err, key) => { if (err) throw err; // Then, we'll generate a random initialization vector randomFill(new Uint8Array(16), (err, iv) => { if (err) throw err; const cipher = createCipheriv(algorithm, key, iv); const input = createReadStream('test.js'); const output = createWriteStream('test.enc'); pipeline(input, cipher, output, (err) => { if (err) throw err; }); }); });
Example: Using the cipher.update() and cipher.final()
methods:const { scrypt, randomFill, createCipheriv, } = await import('node:crypto'); const algorithm = 'aes-192-cbc'; const password = 'Password used to generate key'; // First, we'll generate the key. The key length is dependent on the algorithm. // In this case for aes192, it is 24 bytes (192 bits). scrypt(password, 'salt', 24, (err, key) => { if (err) throw err; // Then, we'll generate a random initialization vector randomFill(new Uint8Array(16), (err, iv) => { if (err) throw err; const cipher = createCipheriv(algorithm, key, iv); let encrypted = cipher.update('some clear text data', 'utf8', 'hex'); encrypted += cipher.final('hex'); console.log(encrypted); }); });
- allowHalfOpen: boolean
If false then the stream will automatically end the writable side when the readable side ends. Set initially by the allowHalfOpen constructor option, which defaults to true.
This can be changed manually to change the half-open behavior of an existing Duplex stream instance, but must be changed before the 'end' event is emitted.
- readable: boolean
Is true if it is safe to call read, which means the stream has not been destroyed or emitted 'error' or 'end'.
- readonly readableAborted: boolean
Returns whether the stream was destroyed or errored before emitting 'end'.
- readonly readableEncoding: null | BufferEncoding
Getter for the property encoding of a given Readable stream. The encoding property can be set using the setEncoding method.
- readonly readableFlowing: null | boolean
This property reflects the current state of a Readable stream as described in the Three states section.
- readonly readableHighWaterMark: number
Returns the value of highWaterMark passed when creating this Readable.
- readonly readableLength: number
This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the highWaterMark.
- readonly writable: boolean
Is true if it is safe to call writable.write(), which means the stream has not been destroyed, errored, or ended.
- readonly writableCorked: number
Number of times writable.uncork() needs to be called in order to fully uncork the stream.
- readonly writableEnded: boolean
Is true after writable.end() has been called. This property does not indicate whether the data has been flushed; for this, use writable.writableFinished instead.
- readonly writableHighWaterMark: number
Returns the value of highWaterMark passed when creating this Writable.
- readonly writableLength: number
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the highWaterMark.
- readonly writableNeedDrain: boolean
Is true if the stream's buffer has been full and the stream will emit 'drain'.
Calls readable.destroy() with an AbortError and returns a promise that fulfills when the stream is finished.
- event: 'close', listener: () => void): this;
Event emitter. The defined events on documents include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
The same event list applies to each of the overloads below.
event: 'data', listener: (chunk: any) => void): this;
event: 'drain', listener: () => void): this;
event: 'end', listener: () => void): this;
event: 'error',): this;
event: 'finish', listener: () => void): this;
event: 'pause', listener: () => void): this;
event: 'pipe',): this;
event: 'readable', listener: () => void): this;
event: 'resume', listener: () => void): this;
event: 'unpipe',): this;
event: string | symbol, listener: (...args: any[]) => void): this;
This method returns a new stream with chunks of the underlying stream paired with a counter in the form [index, chunk]. The first index value is 0 and it increases by 1 for each chunk produced.
@returns a stream of indexed pairs.
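A minimal sketch (this and the other stream helper methods shown below are experimental in Node.js):
import { Readable } from 'node:stream';

// Each chunk is paired with its zero-based index.
const pairs = await Readable.from(['a', 'b', 'c']).asIndexedPairs().toArray();
console.log(pairs); // [[0, 'a'], [1, 'b'], [2, 'c']]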
- stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
The writable.cork() method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.
The primary intent of writable.cork() is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them all to writable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork() without implementing writable._writev() may have an adverse effect on throughput.
See also: writable.uncork(), writable._writev().
- ): this;
Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the readable stream will release any internal resources and subsequent calls to push() will be ignored.
Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.
Implementors should not override this method, but instead implement readable._destroy().
@param error Error which will be passed as payload in 'error' event
- drop(limit: number,
This method returns a new stream with the first limit chunks dropped from the start.
@param limit the number of chunks to drop from the readable.
@returns a stream with limit chunks dropped from the start.
- emit(event: 'close'): boolean;
Synchronously calls each of the listeners registered for the event named eventName, in the order they were registered, passing the supplied arguments to each.
Returns true if the event had listeners, false
otherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener
- end(cb?: () => void): this;
Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.
Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!
end(chunk: any, cb?: () => void): this;
end(chunk: any, encoding: BufferEncoding, cb?: () => void): this;
@param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.
@param encoding The encoding if chunk is a string
Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or Symbol
s.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]
- ): Promise<boolean>;
This method is similar to Array.prototype.every and calls fn on each chunk in the stream to check if all awaited return values are truthy for fn. Once an fn call on a chunk's awaited return value is falsy, the stream is destroyed and the promise is fulfilled with false. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true.
@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to true if fn returned a truthy value for every one of the chunks.
This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise, that promise will be awaited.
@param fn a function to filter chunks from the stream. Async or not.
@returns a stream filtered with the predicate fn.
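A minimal sketch of every() and filter() together (assuming the experimental stream helper methods):
import { Readable } from 'node:stream';

// every(): checks a predicate against all chunks.
console.log(await Readable.from([1, 2, 3]).every((n) => n > 0)); // true

// filter(): keeps only the chunks matching the predicate.
const evens = await Readable.from([1, 2, 3, 4])
  .filter((n) => n % 2 === 0)
  .toArray();
console.log(evens); // [2, 4]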
Once the cipher.final() method has been called, the Cipher object can no longer be used to encrypt data. Attempts to call cipher.final() more than once will result in an error being thrown.
@returns Any remaining enciphered contents. If outputEncoding is specified, a string is returned. If an outputEncoding is not provided, a Buffer is returned.
outputEncoding: BufferEncoding): string;
@param outputEncoding The encoding of the return value.
- ): Promise<undefined | T>;
This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.
@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.
find(): Promise<any>;
This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.
It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.
@param fn a function to map over every chunk in the stream. May be async. May be a stream or generator.
@returns a stream flat-mapped with the function fn.
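A minimal flatMap() sketch under the same assumptions:
import { Readable } from 'node:stream';

// Each chunk maps to an iterable whose items are flattened into the
// resulting stream.
const words = await Readable.from(['hello world', 'foo bar'])
  .flatMap((line) => line.split(' '))
  .toArray();
console.log(words); // ['hello', 'world', 'foo', 'bar']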
- ): Promise<void>;
This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise, that promise will be awaited.
This method is different from for await...of loops in that it can optionally process chunks concurrently. In addition, a forEach iteration can only be stopped by having passed a signal option and aborting the related AbortController, while for await...of can be stopped with break or return. In either case the stream will be destroyed.
This method is different from listening to the 'data' event in that it uses the readable event in the underlying machinery and can limit the number of concurrent fn calls.
@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise for when the stream has finished.
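A minimal forEach() sketch with a concurrency bound (experimental helper, values arbitrary):
import { Readable } from 'node:stream';

// Async callbacks are awaited; concurrency bounds in-flight calls.
await Readable.from([1, 2, 3]).forEach(async (n) => {
  console.log(n);
}, { concurrency: 2 });
console.log('done');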
Returns the current max listener value for the EventEmitter which is either set by emitter.setMaxListeners(n) or defaults to EventEmitter.defaultMaxListeners.
The readable.isPaused() method returns the current operating state of the Readable. This is used primarily by the mechanism that underlies the readable.pipe() method. In most typical cases, there will be no reason to use this method directly.
const readable = new stream.Readable(); readable.isPaused(); // === false readable.pause(); readable.isPaused(); // === true readable.resume(); readable.isPaused(); // === false
- options?: { destroyOnReturn: boolean }): AsyncIterator<any>;
The iterator created by this method gives users the option to cancel the destruction of the stream if the for await...of loop is exited by return, break, or throw, or if the iterator should destroy the stream if the stream emitted an error during iteration.
- eventName: string | symbol, listener?: Function): number;
Returns the number of listeners listening for the event named eventName. If listener is provided, it will return how many times the listener is found in the list of the listeners of the event.
@param eventName The name of the event being listened for
@param listener The event handler function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named eventName.
server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ]
- map(
This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise, that promise will be awaited before being passed to the result stream.
@param fn a function to map over every chunk in the stream. Async or not.
@returns a stream mapped with the function fn.
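A minimal map() sketch (experimental helper; the concurrency option is optional):
import { Readable } from 'node:stream';

// Transform each chunk; async callbacks are awaited before the result
// is passed downstream.
const doubled = await Readable.from([1, 2, 3])
  .map(async (n) => n * 2, { concurrency: 2 })
  .toArray();
console.log(doubled); // [2, 4, 6]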
- eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for emitter.removeListener().
- on(event: 'close', listener: () => void): this;
Adds the listener function to the end of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.
server.on('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the EventEmitter, so that calls can be chained.
By default, event listeners are invoked in the order they are added. The emitter.prependListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.
import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listener The callback function
- once(event: 'close',listener: () => void): this;
Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.
server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the EventEmitter, so that calls can be chained.
By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener() method can be used as an alternative to add the event listener to the beginning of the listeners array.
import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listener The callback function
The readable.pause() method will cause a stream in flowing mode to stop emitting 'data' events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.
const readable = getReadableStreamSomehow(); readable.on('data', (chunk) => { console.log(`Received ${chunk.length} bytes of data.`); readable.pause(); console.log('There will be no additional data for 1 second.'); setTimeout(() => { console.log('Now data will start flowing again.'); readable.resume(); }, 1000); });
The readable.pause() method has no effect if there is a 'readable' event listener.
- event: 'close', listener: () => void): this;
Adds the listener function to the beginning of the listeners array for the event named eventName. No checks are made to see if the listener has already been added. Multiple calls passing the same combination of eventName and listener will result in the listener being added, and called, multiple times.
server.prependListener('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the EventEmitter, so that calls can be chained.
@param listener The callback function
- event: 'close',listener: () => void): this;
Adds a one-time listener function for the event named eventName to the beginning of the listeners array. The next time eventName is triggered, this listener is removed, and then invoked.
server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the EventEmitter, so that calls can be chained.
@param listener The callback function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named eventName, including any wrappers (such as those created by .once()).
import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log');
- read(size?: number): any;
The readable.read() method reads data out of the internal buffer and returns it. If no data is available to be read, null is returned. By default, the data is returned as a Buffer object unless an encoding has been specified using the readable.setEncoding() method or the stream is operating in object mode.
The optional size argument specifies a specific number of bytes to read. If size bytes are not available to be read, null will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.
If the size argument is not specified, all of the data contained in the internal buffer will be returned.
The size argument must be less than or equal to 1 GiB.
The readable.read() method should only be called on Readable streams operating in paused mode. In flowing mode, readable.read() is called automatically until the internal buffer is fully drained.
const readable = getReadableStreamSomehow(); // 'readable' may be triggered multiple times as data is buffered in readable.on('readable', () => { let chunk; console.log('Stream is readable (new data received in buffer)'); // Use a loop to make sure we read all currently available data while (null !== (chunk = readable.read())) { console.log(`Read ${chunk.length} bytes of data...`); } }); // 'end' will be triggered once when there is no more data available readable.on('end', () => { console.log('Reached end of stream.'); });
Each call to readable.read() returns a chunk of data, or null. The chunks are not concatenated. A while loop is necessary to consume all data currently in the buffer. When reading a large file, .read() may return null, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new 'readable' event will be emitted when there is more data in the buffer. Finally the 'end' event will be emitted when there is no more data to come.
Therefore to read a file's whole contents from a readable, it is necessary to collect chunks across multiple 'readable' events:
const chunks = []; readable.on('readable', () => { let chunk; while (null !== (chunk = readable.read())) { chunks.push(chunk); } }); readable.on('end', () => { const content = chunks.join(''); });
A Readable stream in object mode will always return a single item from a call to readable.read(size), regardless of the value of the size argument.
If the readable.read() method returns a chunk of data, a 'data' event will also be emitted.
Calling read after the 'end' event has been emitted will return null. No runtime error will be raised.
@param size Optional argument to specify how much data to read.
- initial?: undefined,): Promise<T>;
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied, the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.
The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the readable.map method.
@param fn a reducer function to call over every chunk in the stream. Async or not.
@param initial the initial value to use in the reduction.
@returns a promise for the final value of the reduction.
initial: T,): Promise<T>;
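A minimal reduce() sketch (experimental helper; values arbitrary):
import { Readable } from 'node:stream';

// Fold the stream into a single value; chunks are processed sequentially.
const total = await Readable.from([1, 2, 3, 4])
  .reduce((sum, n) => sum + n, 0);
console.log(total); // 10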
- eventName?: string | symbol): this;
Removes all listeners, or those of the specified eventName.
It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter instance was created by some other component or module (e.g. sockets or file streams).
Returns a reference to the EventEmitter, so that calls can be chained.
- event: 'close', listener: () => void): this;
Removes the specified listener from the listener array for the event named eventName.
const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback);
removeListener() will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName, then removeListener() must be called multiple times to remove each instance.
Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener() or removeAllListeners() calls after emitting and before the last listener finishes execution will not remove them from emit() in progress. Subsequent events behave as expected.
import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // A
Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners() method will need to be recreated.
When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener() will remove the most recently added instance. In the example the once('ping') listener is removed:
import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping');
Returns a reference to the EventEmitter, so that calls can be chained.
The readable.resume() method causes an explicitly paused Readable stream to resume emitting 'data' events, switching the stream into flowing mode.
The readable.resume() method can be used to fully consume the data from a stream without actually processing any of that data:
getReadableStreamSomehow() .resume() .on('end', () => { console.log('Reached the end, but did not read anything.'); });
The readable.resume() method has no effect if there is a 'readable' event listener.
- autoPadding?: boolean): this;
When using block encryption algorithms, the Cipher class will automatically add padding to the input data to the appropriate block size. To disable the default padding call cipher.setAutoPadding(false).
When autoPadding is false, the length of the entire input data must be a multiple of the cipher's block size or cipher.final() will throw an error. Disabling automatic padding is useful for non-standard padding, for instance using 0x0 instead of PKCS padding.
The cipher.setAutoPadding() method must be called before cipher.final().
@returns for method chaining.
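A short sketch of the effect of disabling padding (algorithm and key size chosen for illustration):
import { createCipheriv, randomBytes } from 'node:crypto';

const key = randomBytes(32);
const iv = randomBytes(16);
const cipher = createCipheriv('aes-256-cbc', key, iv);
cipher.setAutoPadding(false);

// With auto padding disabled, total input must be a multiple of the
// 16-byte AES block size or cipher.final() throws.
const block = Buffer.alloc(16, 0x61);
const encrypted = Buffer.concat([cipher.update(block), cipher.final()]);
console.log(encrypted.length); // 16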
- encoding: BufferEncoding): this;
The writable.setDefaultEncoding() method sets the default encoding for a Writable stream.
@param encoding The new default encoding
- encoding: BufferEncoding): this;
The readable.setEncoding() method sets the character encoding for data read from the Readable stream.
By default, no encoding is assigned and stream data will be returned as Buffer objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than as Buffer objects. For instance, calling readable.setEncoding('utf8') will cause the output data to be interpreted as UTF-8 data, and passed as strings. Calling readable.setEncoding('hex') will cause the data to be encoded in hexadecimal string format.
The Readable stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream as Buffer objects.
const readable = getReadableStreamSomehow(); readable.setEncoding('utf8'); readable.on('data', (chunk) => { assert.equal(typeof chunk, 'string'); console.log('Got %d characters of string data:', chunk.length); });
@param encoding The encoding to use.
- n: number): this;
By default EventEmitters will print a warning if more than 10 listeners are added for a particular event. This is a useful default that helps finding memory leaks. The emitter.setMaxListeners() method allows the limit to be modified for this specific EventEmitter instance. The value can be set to Infinity (or 0) to indicate an unlimited number of listeners.
Returns a reference to the EventEmitter, so that calls can be chained.
- some(): Promise<boolean>;
This method is similar to Array.prototype.some and calls fn on each chunk in the stream until the awaited return value is true (or any truthy value). Once an fn call on a chunk's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false.
@param fn a function to call on each chunk of the stream. Async or not.
@returns a promise evaluating to true if fn returned a truthy value for at least one of the chunks.
- @param limit the number of chunks to take from the readable.
@returns a stream with limit chunks taken.
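A minimal sketch of some(), take(), and drop() (experimental helpers; values arbitrary):
import { Readable } from 'node:stream';

// take() and drop() slice a stream by chunk count.
console.log(await Readable.from([1, 2, 3, 4]).take(2).toArray()); // [1, 2]
console.log(await Readable.from([1, 2, 3, 4]).drop(2).toArray()); // [3, 4]

// some(): resolves true as soon as one chunk matches.
console.log(await Readable.from([1, 2, 3]).some((n) => n === 2)); // true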
- ): Promise<any[]>;
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.
@returns a promise containing an array with the contents of the stream.
The writable.uncork() method flushes all data buffered since cork was called.
When using writable.cork() and writable.uncork() to manage the buffering of writes to a stream, defer calls to writable.uncork() using process.nextTick(). Doing so allows batching of all writable.write() calls that occur within a given Node.js event loop phase.
stream.cork(); stream.write('some '); stream.write('data '); process.nextTick(() => stream.uncork());
If the writable.cork() method is called multiple times on a stream, the same number of calls to writable.uncork() must be made to flush the buffered data.
stream.cork(); stream.write('some '); stream.cork(); stream.write('data '); process.nextTick(() => { stream.uncork(); // The data will not be flushed until uncork() is called a second time. stream.uncork(); });
See also: writable.cork().
- destination?: WritableStream): this;
The readable.unpipe() method detaches a Writable stream previously attached using the pipe method.
If the destination is not specified, then all pipes are detached.
If the destination is specified, but no pipe is set up for it, then the method does nothing.
import fs from 'node:fs'; const readable = getReadableStreamSomehow(); const writable = fs.createWriteStream('file.txt'); // All the data from readable goes into 'file.txt', // but only for the first second. readable.pipe(writable); setTimeout(() => { console.log('Stop writing to file.txt.'); readable.unpipe(writable); console.log('Manually close the file stream.'); writable.end(); }, 1000);
@param destination Optional specific stream to unpipe
- chunk: any, encoding?: BufferEncoding): void;
Passing chunk as null signals the end of the stream (EOF) and behaves the same as readable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.
The readable.unshift() method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.
The stream.unshift(chunk) method cannot be called after the 'end' event has been emitted or a runtime error will be thrown.
Developers using stream.unshift() often should consider switching to use of a Transform stream instead. See the API for stream implementers section for more information.
// Pull off a header delimited by \n\n. // Use unshift() if we get too much. // Call the callback with (error, header, stream). import { StringDecoder } from 'node:string_decoder'; function parseHeader(stream, callback) { stream.on('error', callback); stream.on('readable', onReadable); const decoder = new StringDecoder('utf8'); let header = ''; function onReadable() { let chunk; while (null !== (chunk = stream.read())) { const str = decoder.write(chunk); if (str.includes('\n\n')) { // Found the header boundary. const split = str.split(/\n\n/); header += split.shift(); const remaining = split.join('\n\n'); const buf = Buffer.from(remaining, 'utf8'); stream.removeListener('error', callback); // Remove the 'readable' listener before unshifting. stream.removeListener('readable', onReadable); if (buf.length) stream.unshift(buf); // Now the body of the message can be read from the stream. callback(null, header, stream); return; } // Still reading the header. header += str; } } }
Unlike push, stream.unshift(chunk) will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results if readable.unshift() is called during a read (i.e. from within a _read implementation on a custom stream). Following the call to readable.unshift() with an immediate push will reset the reading state appropriately, however it is best to simply avoid calling readable.unshift() while in the process of performing a read.
@param chunk Chunk of data to unshift onto the read queue. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray}, {DataView} or null. For object mode streams, chunk may be any JavaScript value.
@param encoding Encoding of string chunks. Must be a valid Buffer encoding, such as 'utf8' or 'ascii'.
Updates the cipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer, TypedArray, or DataView. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.
The outputEncoding specifies the output format of the enciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.
The cipher.update() method can be called multiple times with new data until cipher.final() is called. Calling cipher.update() after cipher.final() will result in an error being thrown.
data: string,
data: ArrayBufferView, inputEncoding: undefined,): string;
data: string,): string;
@param inputEncoding The encoding of the data.
@param outputEncoding The encoding of the return value.
- wrap(stream: ReadableStream): this;
Prior to Node.js 0.10, streams did not implement the entire node:stream module API as it is currently defined. (See Compatibility for more information.)
When using an older Node.js library that emits 'data' events and has a pause method that is advisory only, the readable.wrap() method can be used to create a Readable stream that uses the old stream as its data source.
It will rarely be necessary to use readable.wrap() but the method has been provided as a convenience for interacting with older Node.js applications and libraries.
import { OldReader } from './old-api-module.js'; import { Readable } from 'node:stream'; const oreader = new OldReader(); const myReader = new Readable().wrap(oreader); myReader.on('readable', () => { myReader.read(); // etc. });
@param stream An "old style" readable stream
- chunk: any,): boolean;
chunk: any, encoding: BufferEncoding,): boolean;
The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.
The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.
While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing to a socket that is not draining may lead to a remotely exploitable vulnerability.
Writing data while the stream is not draining is particularly problematic for a Transform, because the Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.
If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use pipe. However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:
function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });
A Writable stream in object mode will always ignore the encoding argument.
@param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.
@param encoding The encoding, if chunk is a string.
@param callback Callback for when this chunk of data is flushed.
@returns false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.
interface CipherCCMOptions
- signal?: AbortSignal
When provided, the corresponding AbortController can be used to cancel an asynchronous action.
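A sketch of CCM-mode usage, where the cipher options object carries the required authTagLength (key and nonce sizes chosen for illustration):
import { createCipheriv, randomBytes } from 'node:crypto';

const key = randomBytes(32);
const nonce = randomBytes(12);

// CCM mode requires authTagLength to be specified up front.
const cipher = createCipheriv('aes-256-ccm', key, nonce, {
  authTagLength: 16,
});
const ciphertext = Buffer.concat([cipher.update('some plaintext'), cipher.final()]);
const tag = cipher.getAuthTag();
console.log(ciphertext.length, tag.length); // 14 16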
interface CipherChaCha20Poly1305
Instances of the Cipher class are used to encrypt data. The class can be used in one of two ways:
- As a stream that is both readable and writable, where plain unencrypted data is written to produce encrypted data on the readable side, or
- Using the cipher.update() and cipher.final() methods to produce the encrypted data.
The createCipheriv method is used to create Cipher instances. Cipher objects are not to be created directly using the new keyword.
Example: Using Cipher
objects as streams:const { scrypt, randomFill, createCipheriv, } = await import('node:crypto'); const algorithm = 'aes-192-cbc'; const password = 'Password used to generate key'; // First, we'll generate the key. The key length is dependent on the algorithm. // In this case for aes192, it is 24 bytes (192 bits). scrypt(password, 'salt', 24, (err, key) => { if (err) throw err; // Then, we'll generate a random initialization vector randomFill(new Uint8Array(16), (err, iv) => { if (err) throw err; // Once we have the key and iv, we can create and use the cipher... const cipher = createCipheriv(algorithm, key, iv); let encrypted = ''; cipher.setEncoding('hex'); cipher.on('data', (chunk) => encrypted += chunk); cipher.on('end', () => console.log(encrypted)); cipher.write('some clear text data'); cipher.end(); }); });
Example: Using Cipher
and piped streams:import { createReadStream, createWriteStream, } from 'node:fs'; import { pipeline, } from 'node:stream'; const { scrypt, randomFill, createCipheriv, } = await import('node:crypto'); const algorithm = 'aes-192-cbc'; const password = 'Password used to generate key'; // First, we'll generate the key. The key length is dependent on the algorithm. // In this case for aes192, it is 24 bytes (192 bits). scrypt(password, 'salt', 24, (err, key) => { if (err) throw err; // Then, we'll generate a random initialization vector randomFill(new Uint8Array(16), (err, iv) => { if (err) throw err; const cipher = createCipheriv(algorithm, key, iv); const input = createReadStream('test.js'); const output = createWriteStream('test.enc'); pipeline(input, cipher, output, (err) => { if (err) throw err; }); }); });
Example: Using the cipher.update() and cipher.final()
methods:const { scrypt, randomFill, createCipheriv, } = await import('node:crypto'); const algorithm = 'aes-192-cbc'; const password = 'Password used to generate key'; // First, we'll generate the key. The key length is dependent on the algorithm. // In this case for aes192, it is 24 bytes (192 bits). scrypt(password, 'salt', 24, (err, key) => { if (err) throw err; // Then, we'll generate a random initialization vector randomFill(new Uint8Array(16), (err, iv) => { if (err) throw err; const cipher = createCipheriv(algorithm, key, iv); let encrypted = cipher.update('some clear text data', 'utf8', 'hex'); encrypted += cipher.final('hex'); console.log(encrypted); }); });
- allowHalfOpen: boolean
If false then the stream will automatically end the writable side when the readable side ends. Set initially by the allowHalfOpen constructor option, which defaults to true.
This can be changed manually to change the half-open behavior of an existing Duplex stream instance, but must be changed before the 'end' event is emitted.
- readable: boolean
Is true if it is safe to call read, which means the stream has not been destroyed or emitted 'error' or 'end'.
- readonly readableAborted: boolean
Returns whether the stream was destroyed or errored before emitting 'end'.
- readonly readableEncoding: null | BufferEncoding
Getter for the property encoding of a given Readable stream. The encoding property can be set using the setEncoding method.
- readonly readableFlowing: null | boolean
This property reflects the current state of a Readable stream as described in the Three states section.
- readonly readableHighWaterMark: number
Returns the value of highWaterMark passed when creating this Readable.
- readonly readableLength: number
This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the highWaterMark.
- readonly writable: boolean
Is true if it is safe to call writable.write(), which means the stream has not been destroyed, errored, or ended.
- readonly writableCorked: number
Number of times writable.uncork() needs to be called in order to fully uncork the stream.
- readonly writableEnded: boolean
Is true after writable.end() has been called. This property does not indicate whether the data has been flushed; for this, use writable.writableFinished instead.
- readonly writableHighWaterMark: number
Returns the value of highWaterMark passed when creating this Writable.
- readonly writableLength: number
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the highWaterMark.
- readonly writableNeedDrain: boolean
Is true if the stream's buffer has been full and the stream will emit 'drain'.
Calls readable.destroy() with an AbortError and returns a promise that fulfills when the stream is finished.
- event: 'close', listener: () => void): this;
Event emitter. The defined events on documents include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
The same event list applies to each of the overloads below.
event: 'data', listener: (chunk: any) => void): this;
event: 'drain', listener: () => void): this;
event: 'end', listener: () => void): this;
event: 'error',): this;
event: 'finish', listener: () => void): this;
event: 'pause', listener: () => void): this;
event: 'pipe',): this;
event: 'readable', listener: () => void): this;
event: 'resume', listener: () => void): this;
event: 'unpipe',): this;
event: string | symbol, listener: (...args: any[]) => void): this;
This method returns a new stream with chunks of the underlying stream paired with a counter in the form [index, chunk]. The first index value is 0 and it increases by 1 for each chunk produced.
@returns a stream of indexed pairs.
- stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
The writable.cork() method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.
The primary intent of writable.cork() is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them all to writable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork() without implementing writable._writev() may have an adverse effect on throughput.
See also: writable.uncork(), writable._writev().
- ): this;
Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the readable stream will release any internal resources and subsequent calls to push() will be ignored.
Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.
Implementors should not override this method, but instead implement readable._destroy().
@param error Error which will be passed as payload in 'error' event
- drop(limit: number,
This method returns a new stream with the first limit chunks dropped from the start.
@param limit the number of chunks to drop from the readable.
@returns a stream with limit chunks dropped from the start.
- emit(event: 'close'): boolean;
Synchronously calls each of the listeners registered for the event named
eventName
, in the order they were registered, passing the supplied arguments to each.Returns
true
if the event had listeners,false
otherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener
- end(cb?: () => void): this;
Calling the
writable.end()
method signals that no more data will be written to theWritable
. The optionalchunk
andencoding
arguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!
end(chunk: any, cb?: () => void): this; and end(chunk: any, encoding: BufferEncoding, cb?: () => void): this; are overloads with the same behavior as described above.
@param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.
@param encoding The encoding if chunk is a string.
Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbol
s.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]
- ): Promise<boolean>;
This method is similar to
Array.prototype.every
and calls fn on each chunk in the stream to check if all awaited return values are truthy value for fn. Once an fn call on a chunkawait
ed return value is falsy, the stream is destroyed and the promise is fulfilled withfalse
. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled withtrue
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
true
if fn returned a truthy value for every one of the chunks. This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be
await
ed.@param fna function to filter chunks from the stream. Async or not.
@returnsa stream filtered with the predicate fn.
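A minimal sketch, assuming the experimental filter() helper is available:

import { Readable } from 'node:stream';

// Keep only the even numbers; the predicate may also be async.
const evens = Readable.from([1, 2, 3, 4, 5]).filter((n) => n % 2 === 0);
for await (const n of evens) console.log(n); // 2, 4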
Once the
cipher.final()
method has been called, theCipher
object can no longer be used to encrypt data. Attempts to callcipher.final()
more than once will result in an error being thrown.@returnsAny remaining enciphered contents. If
outputEncoding
is specified, a string is returned. If anoutputEncoding
is not provided, a Buffer is returned.
outputEncoding: BufferEncoding): string; is an overload with the same behavior.
@param outputEncoding The encoding of the return value.
- ): Promise<undefined | T>;
This method is similar to
Array.prototype.find
and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled withundefined
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefined
if no element was found.
find(): Promise<any>; is an overload with the same behavior.
This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.
It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.
@param fna function to map over every chunk in the stream. May be async. May be a stream or generator.
@returnsa stream flat-mapped with the function fn.
- ): Promise<void>;
This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be
await
ed.This method is different from
for await...of
loops in that it can optionally process chunks concurrently. In addition, aforEach
iteration can only be stopped by having passed asignal
option and aborting the related AbortController whilefor await...of
can be stopped withbreak
orreturn
. In either case the stream will be destroyed.This method is different from listening to the
'data'
event in that it uses thereadable
event in the underlying machinery and can limit the number of concurrent fn calls.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise for when the stream has finished.
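A minimal sketch, assuming the experimental forEach() helper and its concurrency option are available:

import { Readable } from 'node:stream';

// Log each chunk, allowing up to two concurrent fn calls.
await Readable.from(['a', 'b', 'c']).forEach(
  async (chunk) => console.log(chunk),
  { concurrency: 2 },
);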
Returns the current max listener value for the
EventEmitter
which is either set byemitter.setMaxListeners(n)
or defaults to EventEmitter.defaultMaxListeners.The
readable.isPaused()
method returns the current operating state of theReadable
. This is used primarily by the mechanism that underlies thereadable.pipe()
method. In most typical cases, there will be no reason to use this method directly.const readable = new stream.Readable(); readable.isPaused(); // === false readable.pause(); readable.isPaused(); // === true readable.resume(); readable.isPaused(); // === false
- options?: { destroyOnReturn: boolean }): AsyncIterator<any>;
The iterator created by this method gives users the option to cancel the destruction of the stream if the
for await...of
loop is exited byreturn
,break
, orthrow
, or if the iterator should destroy the stream if the stream emitted an error during iteration. - eventName: string | symbol,listener?: Function): number;
Returns the number of listeners listening for the event named
eventName
. Iflistener
is provided, it will return how many times the listener is found in the list of the listeners of the event.@param eventNameThe name of the event being listened for
@param listenerThe event handler function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName
.server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ]
- map(
This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be
await
ed before being passed to the result stream.@param fna function to map over every chunk in the stream. Async or not.
@returnsa stream mapped with the function fn.
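A minimal sketch, assuming the experimental map() helper is available:

import { Readable } from 'node:stream';

// Double every chunk; async mappers are awaited before being emitted.
const doubled = Readable.from([1, 2, 3]).map(async (n) => n * 2);
for await (const n of doubled) console.log(n); // 2, 4, 6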
- eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for
emitter.removeListener()
. - on(event: 'close',listener: () => void): this;
Adds the
listener
function to the end of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
- once(event: 'close',listener: () => void): this;
Adds a one-time
listener
function for the event namedeventName
. The next timeeventName
is triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
The
readable.pause()
method will cause a stream in flowing mode to stop emitting'data'
events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.const readable = getReadableStreamSomehow(); readable.on('data', (chunk) => { console.log(`Received ${chunk.length} bytes of data.`); readable.pause(); console.log('There will be no additional data for 1 second.'); setTimeout(() => { console.log('Now data will start flowing again.'); readable.resume(); }, 1000); });
The
readable.pause()
method has no effect if there is a'readable'
event listener.- event: 'close',listener: () => void): this;
Adds the
listener
function to the beginning of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
- event: 'close',listener: () => void): this;
Adds a one-time
listener
function for the event namedeventName
to the beginning of the listeners array. The next timeeventName
is triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName
, including any wrappers (such as those created by.once()
).import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log');
- read(size?: number): any;
The
readable.read()
method reads data out of the internal buffer and returns it. If no data is available to be read,null
is returned. By default, the data is returned as aBuffer
object unless an encoding has been specified using thereadable.setEncoding()
method or the stream is operating in object mode.The optional
size
argument specifies a specific number of bytes to read. Ifsize
bytes are not available to be read,null
will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.If the
size
argument is not specified, all of the data contained in the internal buffer will be returned.The
size
argument must be less than or equal to 1 GiB.The
readable.read()
method should only be called onReadable
streams operating in paused mode. In flowing mode,readable.read()
is called automatically until the internal buffer is fully drained.const readable = getReadableStreamSomehow(); // 'readable' may be triggered multiple times as data is buffered in readable.on('readable', () => { let chunk; console.log('Stream is readable (new data received in buffer)'); // Use a loop to make sure we read all currently available data while (null !== (chunk = readable.read())) { console.log(`Read ${chunk.length} bytes of data...`); } }); // 'end' will be triggered once when there is no more data available readable.on('end', () => { console.log('Reached end of stream.'); });
Each call to
readable.read()
returns a chunk of data, ornull
. The chunks are not concatenated. Awhile
loop is necessary to consume all data currently in the buffer. When reading a large file.read()
may returnnull
, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new'readable'
event will be emitted when there is more data in the buffer. Finally the'end'
event will be emitted when there is no more data to come.Therefore to read a file's whole contents from a
readable
, it is necessary to collect chunks across multiple'readable'
events:const chunks = []; readable.on('readable', () => { let chunk; while (null !== (chunk = readable.read())) { chunks.push(chunk); } }); readable.on('end', () => { const content = chunks.join(''); });
A
Readable
stream in object mode will always return a single item from a call toreadable.read(size)
, regardless of the value of thesize
argument.If the
readable.read()
method returns a chunk of data, a'data'
event will also be emitted.Calling read after the
'end'
event has been emitted will returnnull
. No runtime error will be raised.@param sizeOptional argument to specify how much data to read.
- initial?: undefined,): Promise<T>;
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeError
with theERR_INVALID_ARGS
code property.The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the
readable.map
method.@param fna reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
initial: T,): Promise<T>; is an overload that takes an explicit initial value, with the same behavior and parameters as described above.
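A minimal sketch, assuming the experimental reduce() helper is available:

import { Readable } from 'node:stream';

// Sum all chunks, starting from an explicit initial value of 0.
const total = await Readable.from([1, 2, 3, 4]).reduce((acc, n) => acc + n, 0);
console.log(total); // 10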
- eventName?: string | symbol): this;
Removes all listeners, or those of the specified
eventName
.It is bad practice to remove listeners added elsewhere in the code, particularly when the
EventEmitter
instance was created by some other component or module (e.g. sockets or file streams).Returns a reference to the
EventEmitter
, so that calls can be chained. - event: 'close',listener: () => void): this;
Removes the specified
listener
from the listener array for the event namedeventName
.const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback);
removeListener()
will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specifiedeventName
, thenremoveListener()
must be called multiple times to remove each instance.Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any
removeListener()
orremoveAllListeners()
calls after emitting and before the last listener finishes execution will not remove them fromemit()
in progress. Subsequent events behave as expected.import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // A
Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the
emitter.listeners()
method will need to be recreated.When a single function has been added as a handler multiple times for a single event (as in the example below),
removeListener()
will remove the most recently added instance. In the example theonce('ping')
listener is removed:import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping');
Returns a reference to the
EventEmitter
, so that calls can be chained. The
readable.resume()
method causes an explicitly pausedReadable
stream to resume emitting'data'
events, switching the stream into flowing mode.The
readable.resume()
method can be used to fully consume the data from a stream without actually processing any of that data:getReadableStreamSomehow() .resume() .on('end', () => { console.log('Reached the end, but did not read anything.'); });
The
readable.resume()
method has no effect if there is a'readable'
event listener.- autoPadding?: boolean): this;
When using block encryption algorithms, the
Cipher
class will automatically add padding to the input data to the appropriate block size. To disable the default padding callcipher.setAutoPadding(false)
.When
autoPadding
isfalse
, the length of the entire input data must be a multiple of the cipher's block size orcipher.final()
will throw an error. Disabling automatic padding is useful for non-standard padding, for instance using0x0
instead of PKCS padding.The
cipher.setAutoPadding()
method must be called beforecipher.final()
.@returnsfor method chaining.
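A minimal sketch of disabling padding; aes-256-cbc, the random key, and the one-block input are illustrative choices, not requirements:

const { createCipheriv, randomBytes } = await import('node:crypto');

const key = randomBytes(32);
const iv = randomBytes(16);
const cipher = createCipheriv('aes-256-cbc', key, iv).setAutoPadding(false);

// With padding disabled, the input must fill whole 16-byte AES blocks,
// otherwise cipher.final() throws.
const block = Buffer.alloc(16, 0x61); // exactly one block
const encrypted = Buffer.concat([cipher.update(block), cipher.final()]);
console.log(encrypted.length); // 16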
- encoding: BufferEncoding): this;
The
writable.setDefaultEncoding()
method sets the defaultencoding
for aWritable
stream.@param encodingThe new default encoding
- encoding: BufferEncoding): this;
The
readable.setEncoding()
method sets the character encoding for data read from theReadable
stream.By default, no encoding is assigned and stream data will be returned as
Buffer
objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than asBuffer
objects. For instance, callingreadable.setEncoding('utf8')
will cause the output data to be interpreted as UTF-8 data, and passed as strings. Callingreadable.setEncoding('hex')
will cause the data to be encoded in hexadecimal string format.The
Readable
stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream asBuffer
objects.const readable = getReadableStreamSomehow(); readable.setEncoding('utf8'); readable.on('data', (chunk) => { assert.equal(typeof chunk, 'string'); console.log('Got %d characters of string data:', chunk.length); });
@param encodingThe encoding to use.
- n: number): this;
By default
EventEmitter
s will print a warning if more than10
listeners are added for a particular event. This is a useful default that helps finding memory leaks. Theemitter.setMaxListeners()
method allows the limit to be modified for this specificEventEmitter
instance. The value can be set toInfinity
(or0
) to indicate an unlimited number of listeners.Returns a reference to the
EventEmitter
, so that calls can be chained. - some(): Promise<boolean>;
This method is similar to
Array.prototype.some
and calls fn on each chunk in the stream until the awaited return value istrue
(or any truthy value). Once an fn call on a chunkawait
ed return value is truthy, the stream is destroyed and the promise is fulfilled withtrue
. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled withfalse
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
true
if fn returned a truthy value for at least one of the chunks.
- take(limit: number, ...): This method returns a new stream with the first limit chunks.
@param limit the number of chunks to take from the readable.
@returns a stream with limit chunks taken.
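A minimal sketch, assuming the experimental take() and toArray() helpers are available:

import { Readable } from 'node:stream';

// Keep only the first two chunks and collect them into an array.
const firstTwo = await Readable.from([1, 2, 3, 4]).take(2).toArray();
console.log(firstTwo); // [1, 2]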
- ): Promise<any[]>;
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.
@returnsa promise containing an array with the contents of the stream.
The
writable.uncork()
method flushes all data buffered since cork was called.When using
writable.cork()
andwritable.uncork()
to manage the buffering of writes to a stream, defer calls towritable.uncork()
usingprocess.nextTick()
. Doing so allows batching of allwritable.write()
calls that occur within a given Node.js event loop phase.stream.cork(); stream.write('some '); stream.write('data '); process.nextTick(() => stream.uncork());
If the
writable.cork()
method is called multiple times on a stream, the same number of calls towritable.uncork()
must be called to flush the buffered data.stream.cork(); stream.write('some '); stream.cork(); stream.write('data '); process.nextTick(() => { stream.uncork(); // The data will not be flushed until uncork() is called a second time. stream.uncork(); });
See also:
writable.cork()
.- destination?: WritableStream): this;
The
readable.unpipe()
method detaches aWritable
stream previously attached using the pipe method.If the
destination
is not specified, then all pipes are detached.If the
destination
is specified, but no pipe is set up for it, then the method does nothing.import fs from 'node:fs'; const readable = getReadableStreamSomehow(); const writable = fs.createWriteStream('file.txt'); // All the data from readable goes into 'file.txt', // but only for the first second. readable.pipe(writable); setTimeout(() => { console.log('Stop writing to file.txt.'); readable.unpipe(writable); console.log('Manually close the file stream.'); writable.end(); }, 1000);
@param destinationOptional specific stream to unpipe
- chunk: any,encoding?: BufferEncoding): void;
Passing
chunk
asnull
signals the end of the stream (EOF) and behaves the same asreadable.push(null)
, after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.The
readable.unshift()
method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.The
stream.unshift(chunk)
method cannot be called after the'end'
event has been emitted or a runtime error will be thrown.Developers using
stream.unshift()
often should consider switching to use of aTransform
stream instead. See theAPI for stream implementers
section for more information.// Pull off a header delimited by \n\n. // Use unshift() if we get too much. // Call the callback with (error, header, stream). import { StringDecoder } from 'node:string_decoder'; function parseHeader(stream, callback) { stream.on('error', callback); stream.on('readable', onReadable); const decoder = new StringDecoder('utf8'); let header = ''; function onReadable() { let chunk; while (null !== (chunk = stream.read())) { const str = decoder.write(chunk); if (str.includes('\n\n')) { // Found the header boundary. const split = str.split(/\n\n/); header += split.shift(); const remaining = split.join('\n\n'); const buf = Buffer.from(remaining, 'utf8'); stream.removeListener('error', callback); // Remove the 'readable' listener before unshifting. stream.removeListener('readable', onReadable); if (buf.length) stream.unshift(buf); // Now the body of the message can be read from the stream. callback(null, header, stream); return; } // Still reading the header. header += str; } } }
Unlike push,
stream.unshift(chunk)
will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results ifreadable.unshift()
is called during a read (i.e. from within a _read implementation on a custom stream). Following the call toreadable.unshift()
with an immediate push will reset the reading state appropriately, however it is best to simply avoid callingreadable.unshift()
while in the process of performing a read.@param chunkChunk of data to unshift onto the read queue. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray}, {DataView} ornull
. For object mode streams,chunk
may be any JavaScript value.@param encodingEncoding of string chunks. Must be a valid
Buffer
encoding, such as'utf8'
or'ascii'
. Updates the cipher with
data
. If theinputEncoding
argument is given, thedata
argument is a string using the specified encoding. If theinputEncoding
argument is not given,data
must be aBuffer
,TypedArray
, orDataView
. Ifdata
is aBuffer
,TypedArray
, orDataView
, theninputEncoding
is ignored.The
outputEncoding
specifies the output format of the enciphered data. If theoutputEncoding
is specified, a string using the specified encoding is returned. If nooutputEncoding
is provided, aBuffer
is returned.The
cipher.update()
method can be called multiple times with new data untilcipher.final()
is called. Callingcipher.update()
aftercipher.final()
will result in an error being thrown.
Additional overloads (data: string, ...), (data: ArrayBufferView, inputEncoding: undefined, ...): string, and (data: string, ...): string have the same behavior as described above.
@param inputEncoding The encoding of the data.
@param outputEncoding The encoding of the return value.
- wrap(stream: ReadableStream): this;
Prior to Node.js 0.10, streams did not implement the entire
node:stream
module API as it is currently defined. (SeeCompatibility
for more information.)When using an older Node.js library that emits
'data'
events and has a pause method that is advisory only, thereadable.wrap()
method can be used to create aReadable
stream that uses the old stream as its data source.It will rarely be necessary to use
readable.wrap()
but the method has been provided as a convenience for interacting with older Node.js applications and libraries.import { OldReader } from './old-api-module.js'; import { Readable } from 'node:stream'; const oreader = new OldReader(); const myReader = new Readable().wrap(oreader); myReader.on('readable', () => { myReader.read(); // etc. });
@param streamAn "old style" readable stream
- chunk: any,): boolean;
The
writable.write()
method writes some data to the stream, and calls the suppliedcallback
once the data has been fully handled. If an error occurs, thecallback
will be called with the error as its first argument. Thecallback
is called asynchronously and before'error'
is emitted.The return value is
true
if the internal buffer is less than thehighWaterMark
configured when the stream was created after admittingchunk
. Iffalse
is returned, further attempts to write data to the stream should stop until the'drain'
event is emitted.While a stream is not draining, calls to
write()
will bufferchunk
, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'
event will be emitted. Oncewrite()
returns false, do not write more chunks until the'drain'
event is emitted. While callingwrite()
on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform
, because theTransform
streams are paused by default until they are piped or a'data'
or'readable'
event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readable
and use pipe. However, if callingwrite()
is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'
event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });
A
Writable
stream in object mode will always ignore theencoding
argument.@param chunkOptional data to write. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunk
may be any JavaScript value other thannull
.@param callbackCallback for when this chunk of data is flushed.
@returnsfalse
if the stream wishes for the calling code to wait for the'drain'
event to be emitted before continuing to write additional data; otherwisetrue
.
chunk: any, encoding: BufferEncoding,): boolean; is an overload with the same behavior and parameters as described above, plus:
@param encoding The encoding, if chunk is a string.
interface CipherChaCha20Poly1305Options
- signal?: AbortSignal
When provided the corresponding
AbortController
can be used to cancel an asynchronous action.
interface CipherGCM
Instances of the Cipher class are used to encrypt data. Construction via createCipheriv, the two usage modes (stream, or cipher.update() and cipher.final()), and the accompanying examples are the same as documented for the Cipher class above.
- allowHalfOpen: boolean
If
false
then the stream will automatically end the writable side when the readable side ends. Set initially by theallowHalfOpen
constructor option, which defaults totrue
.This can be changed manually to change the half-open behavior of an existing
Duplex
stream instance, but must be changed before the'end'
event is emitted. - readable: boolean
Is
true
if it is safe to call read, which means the stream has not been destroyed or emitted'error'
or'end'
. - readonly readableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'end'
. - readonly readableEncoding: null | BufferEncoding
Getter for the property
encoding
of a givenReadable
stream. Theencoding
property can be set using the setEncoding method. - readonly readableFlowing: null | boolean
This property reflects the current state of a
Readable
stream as described in the Three states section. - readonly readableHighWaterMark: number
Returns the value of
highWaterMark
passed when creating thisReadable
. - readonly readableLength: number
This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the
highWaterMark
. - readonly writable: boolean
Is
true
if it is safe to callwritable.write()
, which means the stream has not been destroyed, errored, or ended. - readonly writableCorked: number
Number of times
writable.uncork()
needs to be called in order to fully uncork the stream. - readonly writableEnded: boolean
Is
true
afterwritable.end()
has been called. This property does not indicate whether the data has been flushed, for this usewritable.writableFinished
instead. - readonly writableHighWaterMark: number
Return the value of
highWaterMark
passed when creating thisWritable
. - readonly writableLength: number
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the
highWaterMark
. - readonly writableNeedDrain: boolean
Is
true
if the stream's buffer has been full and stream will emit'drain'
. Calls
readable.destroy()
with anAbortError
and returns a promise that fulfills when the stream is finished.
- event: 'close', listener: () => void): this;
event: 'data', listener: (chunk: any) => void): this;
event: 'drain', listener: () => void): this;
event: 'end', listener: () => void): this;
event: 'error',): this;
event: 'finish', listener: () => void): this;
event: 'pause', listener: () => void): this;
event: 'pipe',): this;
event: 'readable', listener: () => void): this;
event: 'resume', listener: () => void): this;
event: 'unpipe',): this;
event: string | symbol, listener: (...args: any[]) => void): this;
Event emitter. The defined events on documents include: close, data, drain, end, error, finish, pause, pipe, readable, resume, unpipe.
This method returns a new stream with chunks of the underlying stream paired with a counter in the form
[index, chunk]
. The first index value is0
and it increases by 1 for each chunk produced.@returnsa stream of indexed pairs.
- stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
The
writable.cork()
method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.The primary intent of
writable.cork()
is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination,writable.cork()
buffers all the chunks untilwritable.uncork()
is called, which will pass them all towritable._writev()
, if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use ofwritable.cork()
without implementingwritable._writev()
may have an adverse effect on throughput.See also:
writable.uncork()
,writable._writev()
.- ): this;
Destroy the stream. Optionally emit an
'error'
event, and emit a'close'
event (unlessemitClose
is set tofalse
). After this call, the readable stream will release any internal resources and subsequent calls topush()
will be ignored.Once
destroy()
has been called any further calls will be a no-op and no further errors except from_destroy()
may be emitted as'error'
.Implementors should not override this method, but instead implement
readable._destroy()
.@param errorError which will be passed as payload in
'error'
event - drop(limit: number,
This method returns a new stream with the first limit chunks dropped from the start.
@param limitthe number of chunks to drop from the readable.
@returnsa stream with limit chunks dropped from the start.
- emit(event: 'close'): boolean;
Synchronously calls each of the listeners registered for the event named
eventName
, in the order they were registered, passing the supplied arguments to each.Returns
true
if the event had listeners,false
otherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener
- end(cb?: () => void): this;
Calling the
writable.end()
method signals that no more data will be written to theWritable
. The optionalchunk
andencoding
arguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!
end(chunk: any, cb?: () => void): this; and end(chunk: any, encoding: BufferEncoding, cb?: () => void): this; are overloads with the same behavior as described above.
@param chunk Optional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.
@param encoding The encoding if chunk is a string.
Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbol
s.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]
- ): Promise<boolean>;
This method is similar to
Array.prototype.every
and calls fn on each chunk in the stream to check if all awaited return values are truthy value for fn. Once an fn call on a chunkawait
ed return value is falsy, the stream is destroyed and the promise is fulfilled withfalse
. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled withtrue
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
true
if fn returned a truthy value for every one of the chunks. This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be
await
ed.@param fna function to filter chunks from the stream. Async or not.
@returnsa stream filtered with the predicate fn.
Once the
cipher.final()
method has been called, theCipher
object can no longer be used to encrypt data. Attempts to callcipher.final()
more than once will result in an error being thrown.@returnsAny remaining enciphered contents. If
outputEncoding
is specified, a string is returned. If anoutputEncoding
is not provided, a Buffer is returned.
outputEncoding: BufferEncoding): string; is an overload with the same behavior.
@param outputEncoding The encoding of the return value.
- ): Promise<undefined | T>;
This method is similar to
Array.prototype.find
and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled withundefined
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefined
if no element was found.
find(): Promise<any>; is an overload with the same behavior.
This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.
It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.
@param fna function to map over every chunk in the stream. May be async. May be a stream or generator.
@returnsa stream flat-mapped with the function fn.
- ): Promise<void>;
This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be
await
ed.This method is different from
for await...of
loops in that it can optionally process chunks concurrently. In addition, aforEach
iteration can only be stopped by having passed asignal
option and aborting the related AbortController whilefor await...of
can be stopped withbreak
orreturn
. In either case the stream will be destroyed.This method is different from listening to the
'data'
event in that it uses thereadable
event in the underlying machinery and can limit the number of concurrent fn calls.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise for when the stream has finished.
Returns the current max listener value for the
EventEmitter
which is either set byemitter.setMaxListeners(n)
or defaults to EventEmitter.defaultMaxListeners.The
readable.isPaused()
method returns the current operating state of theReadable
. This is used primarily by the mechanism that underlies thereadable.pipe()
method. In most typical cases, there will be no reason to use this method directly.const readable = new stream.Readable(); readable.isPaused(); // === false readable.pause(); readable.isPaused(); // === true readable.resume(); readable.isPaused(); // === false
- options?: { destroyOnReturn: boolean }): AsyncIterator<any>;
The iterator created by this method gives users the option to cancel the destruction of the stream if the
for await...of
loop is exited byreturn
,break
, orthrow
, or if the iterator should destroy the stream if the stream emitted an error during iteration. - eventName: string | symbol,listener?: Function): number;
Returns the number of listeners listening for the event named
eventName
. Iflistener
is provided, it will return how many times the listener is found in the list of the listeners of the event.@param eventNameThe name of the event being listened for
@param listenerThe event handler function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName
.server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ]
- map(
This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be
await
ed before being passed to the result stream.@param fna function to map over every chunk in the stream. Async or not.
@returnsa stream mapped with the function fn.
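A minimal sketch, assuming the experimental stream helpers; the async mapper is awaited before its result is pushed downstream:
import { Readable } from 'node:stream';

const doubled = Readable.from([1, 2, 3]).map(async (n) => n * 2);
for await (const n of doubled) console.log(n); // 2, 4, 6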
- eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for
emitter.removeListener()
. - on(event: 'close',listener: () => void): this;
Adds the
listener
function to the end of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
- once(event: 'close',listener: () => void): this;
Adds a one-time
listener
function for the event namedeventName
. The next timeeventName
is triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
The
readable.pause()
method will cause a stream in flowing mode to stop emitting'data'
events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.const readable = getReadableStreamSomehow(); readable.on('data', (chunk) => { console.log(`Received ${chunk.length} bytes of data.`); readable.pause(); console.log('There will be no additional data for 1 second.'); setTimeout(() => { console.log('Now data will start flowing again.'); readable.resume(); }, 1000); });
The
readable.pause()
method has no effect if there is a'readable'
event listener.- event: 'close',listener: () => void): this;
Adds the
listener
function to the beginning of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
- event: 'close',listener: () => void): this;
Adds a one-time
listener
function for the event namedeventName
to the beginning of the listeners array. The next timeeventName
is triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName
, including any wrappers (such as those created by.once()
).import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log');
- read(size?: number): any;
The
readable.read()
method reads data out of the internal buffer and returns it. If no data is available to be read,null
is returned. By default, the data is returned as aBuffer
object unless an encoding has been specified using thereadable.setEncoding()
method or the stream is operating in object mode.The optional
size
argument specifies a specific number of bytes to read. Ifsize
bytes are not available to be read,null
will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.If the
size
argument is not specified, all of the data contained in the internal buffer will be returned.The
size
argument must be less than or equal to 1 GiB.The
readable.read()
method should only be called onReadable
streams operating in paused mode. In flowing mode,readable.read()
is called automatically until the internal buffer is fully drained.const readable = getReadableStreamSomehow(); // 'readable' may be triggered multiple times as data is buffered in readable.on('readable', () => { let chunk; console.log('Stream is readable (new data received in buffer)'); // Use a loop to make sure we read all currently available data while (null !== (chunk = readable.read())) { console.log(`Read ${chunk.length} bytes of data...`); } }); // 'end' will be triggered once when there is no more data available readable.on('end', () => { console.log('Reached end of stream.'); });
Each call to
readable.read()
returns a chunk of data, ornull
. The chunks are not concatenated. Awhile
loop is necessary to consume all data currently in the buffer. When reading a large file, .read()
may returnnull
, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new'readable'
event will be emitted when there is more data in the buffer. Finally the'end'
event will be emitted when there is no more data to come.Therefore to read a file's whole contents from a
readable
, it is necessary to collect chunks across multiple'readable'
events:const chunks = []; readable.on('readable', () => { let chunk; while (null !== (chunk = readable.read())) { chunks.push(chunk); } }); readable.on('end', () => { const content = chunks.join(''); });
A
Readable
stream in object mode will always return a single item from a call toreadable.read(size)
, regardless of the value of thesize
argument.If the
readable.read()
method returns a chunk of data, a'data'
event will also be emitted.Calling read after the
'end'
event has been emitted will returnnull
. No runtime error will be raised.@param sizeOptional argument to specify how much data to read.
- initial?: undefined,): Promise<T>;
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeError
with theERR_INVALID_ARGS
code property.The reducer function iterates the stream element-by-element which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function and pass it to the
readable.map
method.@param fna reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
initial: T,): Promise<T>;
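A minimal sketch of a reduction, assuming the experimental stream helpers; 0 is the explicit initial value, so an empty stream would resolve to 0 instead of rejecting:
import { Readable } from 'node:stream';

const total = await Readable.from([1, 2, 3, 4])
  .reduce((acc, n) => acc + n, 0);
console.log(total); // 10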
- eventName?: string | symbol): this;
Removes all listeners, or those of the specified
eventName
.It is bad practice to remove listeners added elsewhere in the code, particularly when the
EventEmitter
instance was created by some other component or module (e.g. sockets or file streams).Returns a reference to the
EventEmitter
, so that calls can be chained. - event: 'close',listener: () => void): this;
Removes the specified
listener
from the listener array for the event namedeventName
.const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback);
removeListener()
will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specifiedeventName
, thenremoveListener()
must be called multiple times to remove each instance.Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any
removeListener()
orremoveAllListeners()
calls after emitting and before the last listener finishes execution will not remove them fromemit()
in progress. Subsequent events behave as expected.import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // A
Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the
emitter.listeners()
method will need to be recreated.When a single function has been added as a handler multiple times for a single event (as in the example below),
removeListener()
will remove the most recently added instance. In the example theonce('ping')
listener is removed:import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping');
Returns a reference to the
EventEmitter
, so that calls can be chained. The
readable.resume()
method causes an explicitly pausedReadable
stream to resume emitting'data'
events, switching the stream into flowing mode.The
readable.resume()
method can be used to fully consume the data from a stream without actually processing any of that data:getReadableStreamSomehow() .resume() .on('end', () => { console.log('Reached the end, but did not read anything.'); });
The
readable.resume()
method has no effect if there is a'readable'
event listener.- autoPadding?: boolean): this;
When using block encryption algorithms, the
Cipher
class will automatically add padding to the input data to the appropriate block size. To disable the default padding callcipher.setAutoPadding(false)
.When
autoPadding
isfalse
, the length of the entire input data must be a multiple of the cipher's block size orcipher.final()
will throw an error. Disabling automatic padding is useful for non-standard padding, for instance using0x0
instead of PKCS padding.The
cipher.setAutoPadding()
method must be called beforecipher.final()
.@returnsfor method chaining.
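A minimal sketch of disabling padding, assuming a random key and IV rather than a derived key; with auto padding off, the input is zero-padded by hand to the 16-byte AES block size:
const { createCipheriv, randomBytes } = await import('node:crypto');

const key = randomBytes(24); // aes-192 key length
const iv = randomBytes(16);
const cipher = createCipheriv('aes-192-cbc', key, iv);
cipher.setAutoPadding(false);

// Pad manually with 0x0 bytes so the length is a block multiple;
// cipher.final() would otherwise throw.
const block = Buffer.alloc(16, 0);
Buffer.from('short text').copy(block);
const encrypted = Buffer.concat([cipher.update(block), cipher.final()]);
console.log(encrypted.length); // 16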
- encoding: BufferEncoding): this;
The
writable.setDefaultEncoding()
method sets the defaultencoding
for aWritable
stream.@param encodingThe new default encoding
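A minimal sketch of observing the default encoding, assuming a hypothetical custom Writable; decodeStrings: false passes strings through so the underlying write sees the encoding:
import { Writable } from 'node:stream';

const sink = new Writable({
  decodeStrings: false, // keep strings intact so `encoding` is visible
  write(chunk, encoding, callback) {
    console.log(encoding); // 'hex'
    callback();
  },
});
sink.setDefaultEncoding('hex');
sink.write('deadbeef');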
- encoding: BufferEncoding): this;
The
readable.setEncoding()
method sets the character encoding for data read from theReadable
stream.By default, no encoding is assigned and stream data will be returned as
Buffer
objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than asBuffer
objects. For instance, callingreadable.setEncoding('utf8')
will cause the output data to be interpreted as UTF-8 data, and passed as strings. Callingreadable.setEncoding('hex')
will cause the data to be encoded in hexadecimal string format.The
Readable
stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream asBuffer
objects.const readable = getReadableStreamSomehow(); readable.setEncoding('utf8'); readable.on('data', (chunk) => { assert.equal(typeof chunk, 'string'); console.log('Got %d characters of string data:', chunk.length); });
@param encodingThe encoding to use.
- n: number): this;
By default
EventEmitter
s will print a warning if more than10
listeners are added for a particular event. This is a useful default that helps find memory leaks. The emitter.setMaxListeners()
method allows the limit to be modified for this specificEventEmitter
instance. The value can be set toInfinity
(or0
) to indicate an unlimited number of listeners.Returns a reference to the
EventEmitter
, so that calls can be chained. - some(): Promise<boolean>;
This method is similar to
Array.prototype.some
and calls fn on each chunk in the stream until the awaited return value istrue
(or any truthy value). Once an fn call's awaited return value on a chunk is truthy, the stream is destroyed and the promise is fulfilled with true
. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled withfalse
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
true
if fn returned a truthy value for at least one of the chunks. - @param limit
the number of chunks to take from the readable.
@returnsa stream with limit chunks taken.
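A minimal sketch covering both the some() entry above and take, assuming the experimental stream helpers:
import { Readable } from 'node:stream';

// some(): resolves true at the first chunk over 10 and destroys the stream.
const hasLarge = await Readable.from([1, 2, 30]).some((n) => n > 10);

// take(2): keeps only the first two chunks.
const firstTwo = await Readable.from([1, 2, 3, 4]).take(2).toArray();
console.log(hasLarge, firstTwo); // true [ 1, 2 ]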
- ): Promise<any[]>;
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.
@returnsa promise containing an array with the contents of the stream.
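A minimal sketch, assuming the experimental stream helpers; note that this buffers everything in memory:
import { Readable } from 'node:stream';

const contents = await Readable.from(['a', 'b', 'c']).toArray();
console.log(contents); // [ 'a', 'b', 'c' ]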
The
writable.uncork()
method flushes all data buffered since cork was called.When using
writable.cork()
andwritable.uncork()
to manage the buffering of writes to a stream, defer calls towritable.uncork()
usingprocess.nextTick()
. Doing so allows batching of allwritable.write()
calls that occur within a given Node.js event loop phase.stream.cork(); stream.write('some '); stream.write('data '); process.nextTick(() => stream.uncork());
If the
writable.cork()
method is called multiple times on a stream, the same number of calls towritable.uncork()
must be called to flush the buffered data.stream.cork(); stream.write('some '); stream.cork(); stream.write('data '); process.nextTick(() => { stream.uncork(); // The data will not be flushed until uncork() is called a second time. stream.uncork(); });
See also:
writable.cork()
.- destination?: WritableStream): this;
The
readable.unpipe()
method detaches aWritable
stream previously attached using the pipe method.If the
destination
is not specified, then all pipes are detached.If the
destination
is specified, but no pipe is set up for it, then the method does nothing.import fs from 'node:fs'; const readable = getReadableStreamSomehow(); const writable = fs.createWriteStream('file.txt'); // All the data from readable goes into 'file.txt', // but only for the first second. readable.pipe(writable); setTimeout(() => { console.log('Stop writing to file.txt.'); readable.unpipe(writable); console.log('Manually close the file stream.'); writable.end(); }, 1000);
@param destinationOptional specific stream to unpipe
- chunk: any,encoding?: BufferEncoding): void;
Passing
chunk
asnull
signals the end of the stream (EOF) and behaves the same asreadable.push(null)
, after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.The
readable.unshift()
method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.The
stream.unshift(chunk)
method cannot be called after the'end'
event has been emitted or a runtime error will be thrown.Developers using
stream.unshift()
often should consider switching to use of aTransform
stream instead. See theAPI for stream implementers
section for more information.// Pull off a header delimited by \n\n. // Use unshift() if we get too much. // Call the callback with (error, header, stream). import { StringDecoder } from 'node:string_decoder'; function parseHeader(stream, callback) { stream.on('error', callback); stream.on('readable', onReadable); const decoder = new StringDecoder('utf8'); let header = ''; function onReadable() { let chunk; while (null !== (chunk = stream.read())) { const str = decoder.write(chunk); if (str.includes('\n\n')) { // Found the header boundary. const split = str.split(/\n\n/); header += split.shift(); const remaining = split.join('\n\n'); const buf = Buffer.from(remaining, 'utf8'); stream.removeListener('error', callback); // Remove the 'readable' listener before unshifting. stream.removeListener('readable', onReadable); if (buf.length) stream.unshift(buf); // Now the body of the message can be read from the stream. callback(null, header, stream); return; } // Still reading the header. header += str; } } }
Unlike push,
stream.unshift(chunk)
will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results ifreadable.unshift()
is called during a read (i.e. from within a _read implementation on a custom stream). Following the call toreadable.unshift()
with an immediate push will reset the reading state appropriately, however it is best to simply avoid callingreadable.unshift()
while in the process of performing a read.@param chunkChunk of data to unshift onto the read queue. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray}, {DataView} ornull
. For object mode streams,chunk
may be any JavaScript value.@param encodingEncoding of string chunks. Must be a valid
Buffer
encoding, such as'utf8'
or'ascii'
. Updates the cipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer, TypedArray, or DataView. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.
The outputEncoding specifies the output format of the enciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.
The cipher.update() method can be called multiple times with new data until cipher.final() is called. Calling cipher.update() after cipher.final() will result in an error being thrown.
@param inputEncodingThe encoding of the data.
@param outputEncodingThe encoding of the return value.
data: string,inputEncoding: BufferEncoding): Buffer;
data: ArrayBufferView,inputEncoding: undefined,outputEncoding: BufferEncoding): string;
data: string,inputEncoding: BufferEncoding | undefined,outputEncoding: BufferEncoding): string;
- wrap(stream: ReadableStream): this;
Prior to Node.js 0.10, streams did not implement the entire
node:stream
module API as it is currently defined. (SeeCompatibility
for more information.)When using an older Node.js library that emits
'data'
events and has a pause method that is advisory only, thereadable.wrap()
method can be used to create aReadable
stream that uses the old stream as its data source.It will rarely be necessary to use
readable.wrap()
but the method has been provided as a convenience for interacting with older Node.js applications and libraries.import { OldReader } from './old-api-module.js'; import { Readable } from 'node:stream'; const oreader = new OldReader(); const myReader = new Readable().wrap(oreader); myReader.on('readable', () => { myReader.read(); // etc. });
@param streamAn "old style" readable stream
- chunk: any,callback?: (error: Error | null | undefined) => void): boolean;
The
writable.write()
method writes some data to the stream, and calls the suppliedcallback
once the data has been fully handled. If an error occurs, thecallback
will be called with the error as its first argument. Thecallback
is called asynchronously and before'error'
is emitted.The return value is
true
if the internal buffer is less than thehighWaterMark
configured when the stream was created after admittingchunk
. Iffalse
is returned, further attempts to write data to the stream should stop until the'drain'
event is emitted.While a stream is not draining, calls to
write()
will bufferchunk
, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'
event will be emitted. Oncewrite()
returns false, do not write more chunks until the'drain'
event is emitted. While callingwrite()
on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform
, because theTransform
streams are paused by default until they are piped or a'data'
or'readable'
event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readable
and use pipe. However, if callingwrite()
is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'
event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });
A
Writable
stream in object mode will always ignore theencoding
argument.@param chunkOptional data to write. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunk
may be any JavaScript value other thannull
.@param callbackCallback for when this chunk of data is flushed.
@returnsfalse
if the stream wishes for the calling code to wait for the'drain'
event to be emitted before continuing to write additional data; otherwisetrue
.
chunk: any,encoding: BufferEncoding,callback?: (error: Error | null | undefined) => void): boolean;
@param encodingThe encoding, if chunk is a string.
interface CipherGCMOptions
- signal?: AbortSignal
When provided the corresponding
AbortController
can be used to cancel an asynchronous action.
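As a hedged sketch of where these options surface, authTagLength (from Node's cipher option types) can be passed when creating a GCM cipher; the 12-byte IV below is conventional for GCM:
const { createCipheriv, randomBytes } = await import('node:crypto');

const key = randomBytes(32);
const iv = randomBytes(12);
const cipher = createCipheriv('aes-256-gcm', key, iv, { authTagLength: 16 });
const encrypted = Buffer.concat([cipher.update('secret', 'utf8'), cipher.final()]);
console.log(cipher.getAuthTag().length); // 16, per authTagLength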
interface CipherInfo
interface CipherInfoOptions
interface CipherOCB
Example: Using
Cipher
and piped streams:import { createReadStream, createWriteStream, } from 'node:fs'; import { pipeline, } from 'node:stream'; const { scrypt, randomFill, createCipheriv, } = await import('node:crypto'); const algorithm = 'aes-192-cbc'; const password = 'Password used to generate key'; // First, we'll generate the key. The key length is dependent on the algorithm. // In this case for aes192, it is 24 bytes (192 bits). scrypt(password, 'salt', 24, (err, key) => { if (err) throw err; // Then, we'll generate a random initialization vector randomFill(new Uint8Array(16), (err, iv) => { if (err) throw err; const cipher = createCipheriv(algorithm, key, iv); const input = createReadStream('test.js'); const output = createWriteStream('test.enc'); pipeline(input, cipher, output, (err) => { if (err) throw err; }); }); });
Example: Using the
cipher.update()
andcipher.final()
methods:const { scrypt, randomFill, createCipheriv, } = await import('node:crypto'); const algorithm = 'aes-192-cbc'; const password = 'Password used to generate key'; // First, we'll generate the key. The key length is dependent on the algorithm. // In this case for aes192, it is 24 bytes (192 bits). scrypt(password, 'salt', 24, (err, key) => { if (err) throw err; // Then, we'll generate a random initialization vector randomFill(new Uint8Array(16), (err, iv) => { if (err) throw err; const cipher = createCipheriv(algorithm, key, iv); let encrypted = cipher.update('some clear text data', 'utf8', 'hex'); encrypted += cipher.final('hex'); console.log(encrypted); }); });
- allowHalfOpen: boolean
If
false
then the stream will automatically end the writable side when the readable side ends. Set initially by theallowHalfOpen
constructor option, which defaults totrue
.This can be changed manually to change the half-open behavior of an existing
Duplex
stream instance, but must be changed before the'end'
event is emitted. - readable: boolean
Is
true
if it is safe to call read, which means the stream has not been destroyed or emitted'error'
or'end'
. - readonly readableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'end'
. - readonly readableEncoding: null | BufferEncoding
Getter for the property
encoding
of a givenReadable
stream. Theencoding
property can be set using the setEncoding method. - readonly readableFlowing: null | boolean
This property reflects the current state of a
Readable
stream as described in the Three states section. - readonly readableHighWaterMark: number
Returns the value of
highWaterMark
passed when creating thisReadable
. - readonly readableLength: number
This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the
highWaterMark
. - readonly writable: boolean
Is
true
if it is safe to callwritable.write()
, which means the stream has not been destroyed, errored, or ended. - readonly writableCorked: number
Number of times
writable.uncork()
needs to be called in order to fully uncork the stream. - readonly writableEnded: boolean
Is
true
afterwritable.end()
has been called. This property does not indicate whether the data has been flushed, for this usewritable.writableFinished
instead. - readonly writableHighWaterMark: number
Return the value of
highWaterMark
passed when creating thisWritable
. - readonly writableLength: number
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the
highWaterMark
. - readonly writableNeedDrain: boolean
Is
true
if the stream's buffer has been full and stream will emit'drain'
. Calls
readable.destroy()
with anAbortError
and returns a promise that fulfills when the stream is finished.- event: 'close',listener: () => void): this;
Event emitter. The defined events on documents include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'data',listener: (chunk: any) => void): this;
event: 'drain',listener: () => void): this;
event: 'end',listener: () => void): this;
event: 'error',listener: (err: Error) => void): this;
event: 'finish',listener: () => void): this;
event: 'pause',listener: () => void): this;
event: 'pipe',listener: (src: Readable) => void): this;
event: 'readable',listener: () => void): this;
event: 'resume',listener: () => void): this;
event: 'unpipe',listener: (src: Readable) => void): this;
event: string | symbol,listener: (...args: any[]) => void): this;
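A minimal sketch of attaching typed listeners, assuming any Readable source:
import { Readable } from 'node:stream';

const readable = Readable.from(['a', 'b']);
readable.on('data', (chunk) => console.log('chunk:', chunk));
readable.on('end', () => console.log('no more data'));
readable.on('error', (err) => console.error(err));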
This method returns a new stream with chunks of the underlying stream paired with a counter in the form
[index, chunk]
. The first index value is0
and it increases by 1 for each chunk produced.@returnsa stream of indexed pairs.
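A minimal sketch, assuming the experimental stream helpers:
import { Readable } from 'node:stream';

const pairs = await Readable.from(['a', 'b']).asIndexedPairs().toArray();
console.log(pairs); // [ [ 0, 'a' ], [ 1, 'b' ] ]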
- stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
The
writable.cork()
method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.The primary intent of
writable.cork()
is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination,writable.cork()
buffers all the chunks untilwritable.uncork()
is called, which will pass them all towritable._writev()
, if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use ofwritable.cork()
without implementingwritable._writev()
may have an adverse effect on throughput.See also:
writable.uncork()
,writable._writev()
.- ): this;
Destroy the stream. Optionally emit an
'error'
event, and emit a'close'
event (unlessemitClose
is set tofalse
). After this call, the readable stream will release any internal resources and subsequent calls topush()
will be ignored.Once
destroy()
has been called any further calls will be a no-op and no further errors except from_destroy()
may be emitted as'error'
.Implementors should not override this method, but instead implement
readable._destroy()
.@param errorError which will be passed as payload in
'error'
event - drop(limit: number,
This method returns a new stream with the first limit chunks dropped from the start.
@param limitthe number of chunks to drop from the readable.
@returnsa stream with limit chunks dropped from the start.
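A minimal sketch, assuming the experimental stream helpers:
import { Readable } from 'node:stream';

// The first two chunks are discarded; the rest pass through.
const rest = await Readable.from([1, 2, 3, 4]).drop(2).toArray();
console.log(rest); // [ 3, 4 ]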
- emit(event: 'close'): boolean;
Synchronously calls each of the listeners registered for the event named
eventName
, in the order they were registered, passing the supplied arguments to each.Returns
true
if the event had listeners,false
otherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener
- end(cb?: () => void): this;
Calling the
writable.end()
method signals that no more data will be written to theWritable
. The optionalchunk
andencoding
arguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!
end(chunk: any,cb?: () => void): this;
end(chunk: any,encoding: BufferEncoding,cb?: () => void): this;
@param chunkOptional data to write. For streams not operating in object mode, chunk must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk may be any JavaScript value other than null.
@param encodingThe encoding if chunk is a string.
Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbol
s.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]
- ): Promise<boolean>;
This method is similar to
Array.prototype.every
and calls fn on each chunk in the stream to check whether all awaited return values are truthy for fn. Once an fn call's awaited return value on a chunk is falsy, the stream is destroyed and the promise is fulfilled with false
. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled withtrue
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
true
if fn returned a truthy value for every one of the chunks. This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be
await
ed.@param fna function to filter chunks from the stream. Async or not.
@returnsa stream filtered with the predicate fn.
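A minimal sketch, assuming the experimental stream helpers; the predicate may also be async:
import { Readable } from 'node:stream';

const evens = await Readable.from([1, 2, 3, 4])
  .filter((n) => n % 2 === 0)
  .toArray();
console.log(evens); // [ 2, 4 ]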
Once the
cipher.final()
method has been called, theCipher
object can no longer be used to encrypt data. Attempts to callcipher.final()
more than once will result in an error being thrown.@returnsAny remaining enciphered contents. If
outputEncoding
is specified, a string is returned. If anoutputEncoding
is not provided, a Buffer is returned.outputEncoding: BufferEncoding): string;Once the
cipher.final()
method has been called, theCipher
object can no longer be used to encrypt data. Attempts to callcipher.final()
more than once will result in an error being thrown.@param outputEncodingThe
encoding
of the return value.@returnsAny remaining enciphered contents. If
outputEncoding
is specified, a string is returned. If anoutputEncoding
is not provided, a Buffer is returned.
.@returnsfor method chaining.
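A short sketch of the block-alignment requirement described above; the key and IV here are random, purely for illustration:
const { randomBytes, createCipheriv } = await import('node:crypto');

const cipher = createCipheriv('aes-192-cbc', randomBytes(24), randomBytes(16));
cipher.setAutoPadding(false); // must be called before cipher.final()

// With padding disabled, the total input must be a multiple of the 16-byte
// AES block size; this string is exactly 32 bytes of UTF-8.
let encrypted = cipher.update('0123456789abcdef0123456789abcdef', 'utf8', 'hex');
encrypted += cipher.final('hex'); // would throw if the input were not block-aligned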
- encoding: BufferEncoding): this;
The
writable.setDefaultEncoding()
method sets the defaultencoding
for aWritable
stream.@param encodingThe new default encoding
- encoding: BufferEncoding): this;
The
readable.setEncoding()
method sets the character encoding for data read from theReadable
stream.By default, no encoding is assigned and stream data will be returned as
Buffer
objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than asBuffer
objects. For instance, callingreadable.setEncoding('utf8')
will cause the output data to be interpreted as UTF-8 data, and passed as strings. Callingreadable.setEncoding('hex')
will cause the data to be encoded in hexadecimal string format.The
Readable
stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream asBuffer
objects.const readable = getReadableStreamSomehow(); readable.setEncoding('utf8'); readable.on('data', (chunk) => { assert.equal(typeof chunk, 'string'); console.log('Got %d characters of string data:', chunk.length); });
@param encodingThe encoding to use.
- n: number): this;
By default
EventEmitter
s will print a warning if more than10
listeners are added for a particular event. This is a useful default that helps finding memory leaks. Theemitter.setMaxListeners()
method allows the limit to be modified for this specificEventEmitter
instance. The value can be set toInfinity
(or0
) to indicate an unlimited number of listeners.Returns a reference to the
EventEmitter
, so that calls can be chained. - some(): Promise<boolean>;
This method is similar to
Array.prototype.some
and calls fn on each chunk in the stream until the awaited return value istrue
(or any truthy value). Once an fn call's awaited return value for a chunk is truthy, the stream is destroyed and the promise is fulfilled with true
. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled withfalse
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
true
if fn returned a truthy value for at least one of the chunks. - @param limit
the number of chunks to take from the readable.
@returnsa stream with limit chunks taken.
- ): Promise<any[]>;
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.
@returnsa promise containing an array with the contents of the stream.
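To make some, take, and toArray concrete, a small sketch over an object-mode stream built with Readable.from (all three helpers are experimental):
import { Readable } from 'node:stream';

// take(2) keeps only the first two chunks; toArray() collects them into memory.
console.log(await Readable.from([1, 2, 3, 4]).take(2).toArray()); // [ 1, 2 ]

// some() short-circuits and destroys the stream on the first truthy result.
console.log(await Readable.from([1, 3, 4]).some((n) => n % 2 === 0)); // true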
The
writable.uncork()
method flushes all data buffered since cork was called.When using
writable.cork()
andwritable.uncork()
to manage the buffering of writes to a stream, defer calls towritable.uncork()
usingprocess.nextTick()
. Doing so allows batching of allwritable.write()
calls that occur within a given Node.js event loop phase.stream.cork(); stream.write('some '); stream.write('data '); process.nextTick(() => stream.uncork());
If the
writable.cork()
method is called multiple times on a stream, the same number of calls towritable.uncork()
must be called to flush the buffered data.stream.cork(); stream.write('some '); stream.cork(); stream.write('data '); process.nextTick(() => { stream.uncork(); // The data will not be flushed until uncork() is called a second time. stream.uncork(); });
See also:
writable.cork()
.- destination?: WritableStream): this;
The
readable.unpipe()
method detaches aWritable
stream previously attached using the pipe method.If the
destination
is not specified, then all pipes are detached.If the
destination
is specified, but no pipe is set up for it, then the method does nothing.import fs from 'node:fs'; const readable = getReadableStreamSomehow(); const writable = fs.createWriteStream('file.txt'); // All the data from readable goes into 'file.txt', // but only for the first second. readable.pipe(writable); setTimeout(() => { console.log('Stop writing to file.txt.'); readable.unpipe(writable); console.log('Manually close the file stream.'); writable.end(); }, 1000);
@param destinationOptional specific stream to unpipe
- chunk: any,encoding?: BufferEncoding): void;
Passing
chunk
asnull
signals the end of the stream (EOF) and behaves the same asreadable.push(null)
, after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.The
readable.unshift()
method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.The
stream.unshift(chunk)
method cannot be called after the'end'
event has been emitted or a runtime error will be thrown.Developers using
stream.unshift()
often should consider switching to use of aTransform
stream instead. See theAPI for stream implementers
section for more information.// Pull off a header delimited by \n\n. // Use unshift() if we get too much. // Call the callback with (error, header, stream). import { StringDecoder } from 'node:string_decoder'; function parseHeader(stream, callback) { stream.on('error', callback); stream.on('readable', onReadable); const decoder = new StringDecoder('utf8'); let header = ''; function onReadable() { let chunk; while (null !== (chunk = stream.read())) { const str = decoder.write(chunk); if (str.includes('\n\n')) { // Found the header boundary. const split = str.split(/\n\n/); header += split.shift(); const remaining = split.join('\n\n'); const buf = Buffer.from(remaining, 'utf8'); stream.removeListener('error', callback); // Remove the 'readable' listener before unshifting. stream.removeListener('readable', onReadable); if (buf.length) stream.unshift(buf); // Now the body of the message can be read from the stream. callback(null, header, stream); return; } // Still reading the header. header += str; } } }
Unlike push,
stream.unshift(chunk)
will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results ifreadable.unshift()
is called during a read (i.e. from within a _read implementation on a custom stream). Following the call toreadable.unshift()
with an immediate push will reset the reading state appropriately, however it is best to simply avoid callingreadable.unshift()
while in the process of performing a read.@param chunkChunk of data to unshift onto the read queue. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray}, {DataView} ornull
. For object mode streams,chunk
may be any JavaScript value.@param encodingEncoding of string chunks. Must be a valid
Buffer
encoding, such as'utf8'
or'ascii'
. Updates the cipher with
data
. If theinputEncoding
argument is given, thedata
argument is a string using the specified encoding. If theinputEncoding
argument is not given,data
must be aBuffer
,TypedArray
, orDataView
. Ifdata
is aBuffer
,TypedArray
, orDataView
, theninputEncoding
is ignored.The
outputEncoding
specifies the output format of the enciphered data. If theoutputEncoding
is specified, a string using the specified encoding is returned. If nooutputEncoding
is provided, aBuffer
is returned.The
cipher.update()
method can be called multiple times with new data untilcipher.final()
is called. Callingcipher.update()
aftercipher.final()
will result in an error being thrown.data: string,Updates the cipher with
data
. If theinputEncoding
argument is given, thedata
argument is a string using the specified encoding. If theinputEncoding
argument is not given,data
must be aBuffer
,TypedArray
, orDataView
. Ifdata
is aBuffer
,TypedArray
, orDataView
, theninputEncoding
is ignored.The
outputEncoding
specifies the output format of the enciphered data. If theoutputEncoding
is specified, a string using the specified encoding is returned. If nooutputEncoding
is provided, aBuffer
is returned.The
cipher.update()
method can be called multiple times with new data untilcipher.final()
is called. Callingcipher.update()
aftercipher.final()
will result in an error being thrown.@param inputEncodingThe
encoding
of the data.data: ArrayBufferView,inputEncoding: undefined,): string;Updates the cipher with
data
. If theinputEncoding
argument is given, thedata
argument is a string using the specified encoding. If theinputEncoding
argument is not given,data
must be aBuffer
,TypedArray
, orDataView
. Ifdata
is aBuffer
,TypedArray
, orDataView
, theninputEncoding
is ignored.The
outputEncoding
specifies the output format of the enciphered data. If theoutputEncoding
is specified, a string using the specified encoding is returned. If nooutputEncoding
is provided, aBuffer
is returned.The
cipher.update()
method can be called multiple times with new data untilcipher.final()
is called. Callingcipher.update()
aftercipher.final()
will result in an error being thrown.@param inputEncodingThe
encoding
of the data.@param outputEncodingThe
encoding
of the return value.data: string,): string;Updates the cipher with
data
. If theinputEncoding
argument is given, thedata
argument is a string using the specified encoding. If theinputEncoding
argument is not given,data
must be aBuffer
,TypedArray
, orDataView
. Ifdata
is aBuffer
,TypedArray
, orDataView
, theninputEncoding
is ignored.The
outputEncoding
specifies the output format of the enciphered data. If theoutputEncoding
is specified, a string using the specified encoding is returned. If nooutputEncoding
is provided, aBuffer
is returned.The
cipher.update()
method can be called multiple times with new data untilcipher.final()
is called. Callingcipher.update()
aftercipher.final()
will result in an error being thrown.@param inputEncodingThe
encoding
of the data.@param outputEncodingThe
encoding
of the return value.- wrap(stream: ReadableStream): this;
Prior to Node.js 0.10, streams did not implement the entire
node:stream
module API as it is currently defined. (SeeCompatibility
for more information.)When using an older Node.js library that emits
'data'
events and has a pause method that is advisory only, thereadable.wrap()
method can be used to create aReadable
stream that uses the old stream as its data source.It will rarely be necessary to use
readable.wrap()
but the method has been provided as a convenience for interacting with older Node.js applications and libraries.import { OldReader } from './old-api-module.js'; import { Readable } from 'node:stream'; const oreader = new OldReader(); const myReader = new Readable().wrap(oreader); myReader.on('readable', () => { myReader.read(); // etc. });
@param streamAn "old style" readable stream
- chunk: any,): boolean;
The
writable.write()
method writes some data to the stream, and calls the suppliedcallback
once the data has been fully handled. If an error occurs, thecallback
will be called with the error as its first argument. Thecallback
is called asynchronously and before'error'
is emitted.The return value is
true
if the internal buffer is less than thehighWaterMark
configured when the stream was created after admittingchunk
. Iffalse
is returned, further attempts to write data to the stream should stop until the'drain'
event is emitted.While a stream is not draining, calls to
write()
will bufferchunk
, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'
event will be emitted. Oncewrite()
returns false, do not write more chunks until the'drain'
event is emitted. While callingwrite()
on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform
, because theTransform
streams are paused by default until they are piped or a'data'
or'readable'
event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readable
and use pipe. However, if callingwrite()
is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'
event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });
A
Writable
stream in object mode will always ignore theencoding
argument.@param chunkOptional data to write. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunk
may be any JavaScript value other thannull
.@param callbackCallback for when this chunk of data is flushed.
@returnsfalse
if the stream wishes for the calling code to wait for the'drain'
event to be emitted before continuing to write additional data; otherwisetrue
.chunk: any,encoding: BufferEncoding,): boolean;The
writable.write()
method writes some data to the stream, and calls the suppliedcallback
once the data has been fully handled. If an error occurs, thecallback
will be called with the error as its first argument. Thecallback
is called asynchronously and before'error'
is emitted.The return value is
true
if the internal buffer is less than thehighWaterMark
configured when the stream was created after admittingchunk
. Iffalse
is returned, further attempts to write data to the stream should stop until the'drain'
event is emitted.While a stream is not draining, calls to
write()
will bufferchunk
, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'
event will be emitted. Oncewrite()
returns false, do not write more chunks until the'drain'
event is emitted. While callingwrite()
on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform
, because theTransform
streams are paused by default until they are piped or a'data'
or'readable'
event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readable
and use pipe. However, if callingwrite()
is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'
event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });
A
Writable
stream in object mode will always ignore theencoding
argument.@param chunkOptional data to write. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunk
may be any JavaScript value other thannull
.@param encodingThe encoding, if
chunk
is a string.@param callbackCallback for when this chunk of data is flushed.
@returnsfalse
if the stream wishes for the calling code to wait for the'drain'
event to be emitted before continuing to write additional data; otherwisetrue
.
interface CipherOCBOptions
- signal?: AbortSignal
When provided the corresponding
AbortController
can be used to cancel an asynchronous action.
interface DecipherCCM
Instances of the
Decipher
class are used to decrypt data. The class can be used in one of two ways:- As a
stream
that is both readable and writable, where plain encrypted data is written to produce unencrypted data on the readable side, or - Using the
decipher.update()
anddecipher.final()
methods to produce the unencrypted data.
The createDecipheriv method is used to create
Decipher
instances.Decipher
objects are not to be created directly using thenew
keyword.Example: Using
Decipher
objects as streams:import { Buffer } from 'node:buffer'; const { scryptSync, createDecipheriv, } = await import('node:crypto'); const algorithm = 'aes-192-cbc'; const password = 'Password used to generate key'; // Key length is dependent on the algorithm. In this case for aes192, it is // 24 bytes (192 bits). // Use the async `crypto.scrypt()` instead. const key = scryptSync(password, 'salt', 24); // The IV is usually passed along with the ciphertext. const iv = Buffer.alloc(16, 0); // Initialization vector. const decipher = createDecipheriv(algorithm, key, iv); let decrypted = ''; decipher.on('readable', () => { let chunk; while (null !== (chunk = decipher.read())) { decrypted += chunk.toString('utf8'); } }); decipher.on('end', () => { console.log(decrypted); // Prints: some clear text data }); // Encrypted with same algorithm, key and iv. const encrypted = 'e5f79c5915c02171eec6b212d5520d44480993d7d622a7c4c2da32f6efda0ffa'; decipher.write(encrypted, 'hex'); decipher.end();
Example: Using
Decipher
and piped streams:import { createReadStream, createWriteStream, } from 'node:fs'; import { Buffer } from 'node:buffer'; const { scryptSync, createDecipheriv, } = await import('node:crypto'); const algorithm = 'aes-192-cbc'; const password = 'Password used to generate key'; // Use the async `crypto.scrypt()` instead. const key = scryptSync(password, 'salt', 24); // The IV is usually passed along with the ciphertext. const iv = Buffer.alloc(16, 0); // Initialization vector. const decipher = createDecipheriv(algorithm, key, iv); const input = createReadStream('test.enc'); const output = createWriteStream('test.js'); input.pipe(decipher).pipe(output);
Example: Using the
decipher.update()
anddecipher.final()
methods:import { Buffer } from 'node:buffer'; const { scryptSync, createDecipheriv, } = await import('node:crypto'); const algorithm = 'aes-192-cbc'; const password = 'Password used to generate key'; // Use the async `crypto.scrypt()` instead. const key = scryptSync(password, 'salt', 24); // The IV is usually passed along with the ciphertext. const iv = Buffer.alloc(16, 0); // Initialization vector. const decipher = createDecipheriv(algorithm, key, iv); // Encrypted using same algorithm, key and iv. const encrypted = 'e5f79c5915c02171eec6b212d5520d44480993d7d622a7c4c2da32f6efda0ffa'; let decrypted = decipher.update(encrypted, 'hex', 'utf8'); decrypted += decipher.final('utf8'); console.log(decrypted); // Prints: some clear text data
- allowHalfOpen: boolean
If
false
then the stream will automatically end the writable side when the readable side ends. Set initially by theallowHalfOpen
constructor option, which defaults totrue
.This can be changed manually to change the half-open behavior of an existing
Duplex
stream instance, but must be changed before the'end'
event is emitted. - readable: boolean
Is
true
if it is safe to call read, which means the stream has not been destroyed or emitted'error'
or'end'
. - readonly readableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'end'
. - readonly readableEncoding: null | BufferEncoding
Getter for the property
encoding
of a givenReadable
stream. Theencoding
property can be set using the setEncoding method. - readonly readableFlowing: null | boolean
This property reflects the current state of a
Readable
stream as described in the Three states section. - readonly readableHighWaterMark: number
Returns the value of
highWaterMark
passed when creating thisReadable
. - readonly readableLength: number
This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the
highWaterMark
. - readonly writable: boolean
Is
true
if it is safe to callwritable.write()
, which means the stream has not been destroyed, errored, or ended. - readonly writableCorked: number
Number of times
writable.uncork()
needs to be called in order to fully uncork the stream. - readonly writableEnded: boolean
Is
true
afterwritable.end()
has been called. This property does not indicate whether the data has been flushed, for this usewritable.writableFinished
instead. - readonly writableHighWaterMark: number
Return the value of
highWaterMark
passed when creating thisWritable
. - readonly writableLength: number
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the
highWaterMark
. - readonly writableNeedDrain: boolean
Is
true
if the stream's buffer has been full and stream will emit'drain'
. Calls
readable.destroy()
with anAbortError
and returns a promise that fulfills when the stream is finished.- event: 'close',listener: () => void): this;
Event emitter. The defined events on documents include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
The same event list applies to each of the remaining overloads:
event: 'data',listener: (chunk: any) => void): this;
event: 'drain',listener: () => void): this;
event: 'end',listener: () => void): this;
event: 'error',): this;
event: 'finish',listener: () => void): this;
event: 'pause',listener: () => void): this;
event: 'pipe',): this;
event: 'readable',listener: () => void): this;
event: 'resume',listener: () => void): this;
event: 'unpipe',): this;
event: string | symbol,listener: (...args: any[]) => void): this;
This method returns a new stream with chunks of the underlying stream paired with a counter in the form
[index, chunk]
. The first index value is0
and it increases by 1 for each chunk produced.@returnsa stream of indexed pairs.
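A tiny sketch of the indexed-pair shape, assuming this experimental helper is available in the running Node.js version:
import { Readable } from 'node:stream';

const pairs = await Readable.from(['a', 'b']).asIndexedPairs().toArray();
console.log(pairs); // [ [ 0, 'a' ], [ 1, 'b' ] ]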
- stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
The
writable.cork()
method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.The primary intent of
writable.cork()
is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination,writable.cork()
buffers all the chunks untilwritable.uncork()
is called, which will pass them all towritable._writev()
, if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use ofwritable.cork()
without implementingwritable._writev()
may have an adverse effect on throughput.See also:
writable.uncork()
,writable._writev()
.- ): this;
Destroy the stream. Optionally emit an
'error'
event, and emit a'close'
event (unlessemitClose
is set tofalse
). After this call, the readable stream will release any internal resources and subsequent calls topush()
will be ignored.Once
destroy()
has been called any further calls will be a no-op and no further errors except from_destroy()
may be emitted as'error'
.Implementors should not override this method, but instead implement
readable._destroy()
.@param errorError which will be passed as payload in
'error'
event - drop(limit: number,
This method returns a new stream with the first limit chunks dropped from the start.
@param limitthe number of chunks to drop from the readable.
@returnsa stream with limit chunks dropped from the start.
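For example, skipping a header row (a sketch using Readable.from for illustration):
import { Readable } from 'node:stream';

// drop(1) discards the first chunk and passes the rest through.
const rows = await Readable.from(['header', 'row1', 'row2']).drop(1).toArray();
console.log(rows); // [ 'row1', 'row2' ]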
- emit(event: 'close'): boolean;
Synchronously calls each of the listeners registered for the event named
eventName
, in the order they were registered, passing the supplied arguments to each.Returns
true
if the event had listeners,false
otherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener
- end(cb?: () => void): this;
Calling the
writable.end()
method signals that no more data will be written to theWritable
. The optionalchunk
andencoding
arguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!
end(chunk: any,cb?: () => void): this;Calling the
writable.end()
method signals that no more data will be written to theWritable
. The optionalchunk
andencoding
arguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!
@param chunkOptional data to write. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunk
may be any JavaScript value other thannull
.end(chunk: any,encoding: BufferEncoding,cb?: () => void): this;Calling the
writable.end()
method signals that no more data will be written to theWritable
. The optionalchunk
andencoding
arguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!
@param chunkOptional data to write. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunk
may be any JavaScript value other thannull
.@param encodingThe encoding if
chunk
is a string Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbol
s.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]
- ): Promise<boolean>;
This method is similar to
Array.prototype.every
and calls fn on each chunk in the stream to check if all awaited return values are truthy for fn. Once an fn call's awaited return value for a chunk is falsy, the stream is destroyed and the promise is fulfilled with false
. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled withtrue
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
true
if fn returned a truthy value for every one of the chunks. This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be
await
ed.@param fna function to filter chunks from the stream. Async or not.
@returnsa stream filtered with the predicate fn.
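A compact sketch of every and filter over the same data; async predicates are awaited as described above:
import { Readable } from 'node:stream';

// filter() keeps only the chunks whose predicate resolves truthy.
const evens = await Readable.from([1, 2, 3, 4]).filter((n) => n % 2 === 0).toArray();
console.log(evens); // [ 2, 4 ]

// every() destroys the stream and resolves false on the first falsy result.
console.log(await Readable.from([2, 4, 5]).every(async (n) => n % 2 === 0)); // false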
Once the
decipher.final()
method has been called, theDecipher
object can no longer be used to decrypt data. Attempts to calldecipher.final()
more than once will result in an error being thrown.@returnsAny remaining deciphered contents. If
outputEncoding
is specified, a string is returned. If anoutputEncoding
is not provided, a Buffer is returned.outputEncoding: BufferEncoding): string;Once the
decipher.final()
method has been called, theDecipher
object can no longer be used to decrypt data. Attempts to calldecipher.final()
more than once will result in an error being thrown.@param outputEncodingThe
encoding
of the return value.@returnsAny remaining deciphered contents. If
outputEncoding
is specified, a string is returned. If anoutputEncoding
is not provided, a Buffer is returned.- ): Promise<undefined | T>;
This method is similar to
Array.prototype.find
and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefined
if no element was found.find(): Promise<any>;This method is similar to
Array.prototype.find
and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefined
if no element was found. This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.
It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.
@param fna function to map over every chunk in the stream. May be async. May be a stream or generator.
@returnsa stream flat-mapped with the function fn.
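For instance, splitting each chunk into several chunks; the array returned by fn is flattened into the result stream (a sketch using Readable.from):
import { Readable } from 'node:stream';

const words = await Readable.from(['a b', 'c d'])
  .flatMap((line) => line.split(' '))
  .toArray();
console.log(words); // [ 'a', 'b', 'c', 'd' ]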
- ): Promise<void>;
This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be
await
ed.This method is different from
for await...of
loops in that it can optionally process chunks concurrently. In addition, aforEach
iteration can only be stopped by having passed asignal
option and aborting the related AbortController whilefor await...of
can be stopped withbreak
orreturn
. In either case the stream will be destroyed.This method is different from listening to the
'data'
event in that it uses thereadable
event in the underlying machinery and can limit the number of concurrent fn calls.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise for when the stream has finished.
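A sketch of forEach with the concurrency and signal options mentioned above; aborting the AbortController is the only way to stop the iteration early:
import { Readable } from 'node:stream';

const ac = new AbortController();
try {
  await Readable.from([1, 2, 3, 4]).forEach((chunk) => {
    console.log(chunk);
    if (chunk === 2) ac.abort(); // destroys the stream and rejects the promise
  }, { concurrency: 1, signal: ac.signal });
} catch (err) {
  if (err.code !== 'ABORT_ERR') throw err;
  console.log('iteration aborted');
}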
Returns the current max listener value for the
EventEmitter
which is either set byemitter.setMaxListeners(n)
or defaults to EventEmitter.defaultMaxListeners.The
readable.isPaused()
method returns the current operating state of theReadable
. This is used primarily by the mechanism that underlies thereadable.pipe()
method. In most typical cases, there will be no reason to use this method directly.const readable = new stream.Readable(); readable.isPaused(); // === false readable.pause(); readable.isPaused(); // === true readable.resume(); readable.isPaused(); // === false
- options?: { destroyOnReturn: boolean }): AsyncIterator<any>;
The iterator created by this method gives users the option to cancel the destruction of the stream if the
for await...of
loop is exited by return, break, or throw, or whether the iterator should destroy the stream when the stream emits an error during iteration. - eventName: string | symbol,listener?: Function): number;
Returns the number of listeners listening for the event named
eventName
. Iflistener
is provided, it will return how many times the listener is found in the list of the listeners of the event.@param eventNameThe name of the event being listened for
@param listenerThe event handler function
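Returning to the iterator(options) method described above, a sketch of destroyOnReturn: false, which lets a for await...of loop exit early without destroying the stream:
import { Readable } from 'node:stream';

const readable = Readable.from([1, 2, 3]);
for await (const chunk of readable.iterator({ destroyOnReturn: false })) {
  if (chunk === 1) break; // this break does not destroy the stream
}
console.log(readable.destroyed); // false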
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName
.server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ]
- map(
This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be
await
ed before being passed to the result stream.@param fna function to map over every chunk in the stream. Async or not.
@returnsa stream mapped with the function fn.
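A sketch of map with an async fn and bounded concurrency (Readable.from is illustrative):
import { Readable } from 'node:stream';

// At most two fn calls run at a time; results stay in input order.
const doubled = await Readable.from([1, 2, 3])
  .map(async (n) => n * 2, { concurrency: 2 })
  .toArray();
console.log(doubled); // [ 2, 4, 6 ]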
- eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for
emitter.removeListener()
. - on(event: 'close',listener: () => void): this;
Adds the
listener
function to the end of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
- once(event: 'close',listener: () => void): this;
Adds a one-time
listener
function for the event namedeventName
. The next timeeventName
is triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
The
readable.pause()
method will cause a stream in flowing mode to stop emitting'data'
events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.const readable = getReadableStreamSomehow(); readable.on('data', (chunk) => { console.log(`Received ${chunk.length} bytes of data.`); readable.pause(); console.log('There will be no additional data for 1 second.'); setTimeout(() => { console.log('Now data will start flowing again.'); readable.resume(); }, 1000); });
The
readable.pause()
method has no effect if there is a'readable'
event listener.- event: 'close',listener: () => void): this;
Adds the
listener
function to the beginning of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
- event: 'close',listener: () => void): this;
Adds a one-time
listener
function for the event namedeventName
to the beginning of the listeners array. The next timeeventName
is triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName
, including any wrappers (such as those created by.once()
).import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log');
- read(size?: number): any;
The
readable.read()
method reads data out of the internal buffer and returns it. If no data is available to be read,null
is returned. By default, the data is returned as aBuffer
object unless an encoding has been specified using thereadable.setEncoding()
method or the stream is operating in object mode.The optional
size
argument specifies a specific number of bytes to read. Ifsize
bytes are not available to be read,null
will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.If the
size
argument is not specified, all of the data contained in the internal buffer will be returned.The
size
argument must be less than or equal to 1 GiB.The
readable.read()
method should only be called onReadable
streams operating in paused mode. In flowing mode,readable.read()
is called automatically until the internal buffer is fully drained.const readable = getReadableStreamSomehow(); // 'readable' may be triggered multiple times as data is buffered in readable.on('readable', () => { let chunk; console.log('Stream is readable (new data received in buffer)'); // Use a loop to make sure we read all currently available data while (null !== (chunk = readable.read())) { console.log(`Read ${chunk.length} bytes of data...`); } }); // 'end' will be triggered once when there is no more data available readable.on('end', () => { console.log('Reached end of stream.'); });
Each call to
readable.read()
returns a chunk of data, ornull
. The chunks are not concatenated. Awhile
loop is necessary to consume all data currently in the buffer. When reading a large file, .read()
may return null
, having consumed all buffered content so far, with more data still to come that has not yet been buffered. In this case a new 'readable'
event will be emitted when there is more data in the buffer. Finally, the 'end'
event will be emitted when there is no more data to come. Therefore, to read a file's whole contents from a
readable
, it is necessary to collect chunks across multiple'readable'
events:const chunks = []; readable.on('readable', () => { let chunk; while (null !== (chunk = readable.read())) { chunks.push(chunk); } }); readable.on('end', () => { const content = chunks.join(''); });
A
Readable
stream in object mode will always return a single item from a call toreadable.read(size)
, regardless of the value of thesize
argument.If the
readable.read()
method returns a chunk of data, a'data'
event will also be emitted.Calling read after the
'end'
event has been emitted will returnnull
. No runtime error will be raised.@param sizeOptional argument to specify how much data to read.
- initial?: undefined,): Promise<T>;
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeError
with theERR_INVALID_ARGS
code property. The reducer function iterates the stream element by element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function and pass it to the
readable.map
method. @param fn a reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
initial: T,): Promise<T>;This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeError
with theERR_INVALID_ARGS
code property. The reducer function iterates the stream element by element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function and pass it to the
readable.map
method. @param fn a reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
- eventName?: string | symbol): this;
Removes all listeners, or those of the specified
eventName
.It is bad practice to remove listeners added elsewhere in the code, particularly when the
EventEmitter
instance was created by some other component or module (e.g. sockets or file streams).Returns a reference to the
EventEmitter
, so that calls can be chained. - event: 'close',listener: () => void): this;
Removes the specified
listener
from the listener array for the event namedeventName
.const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback);
removeListener()
will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specifiedeventName
, thenremoveListener()
must be called multiple times to remove each instance.Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any
removeListener()
orremoveAllListeners()
calls after emitting and before the last listener finishes execution will not remove them fromemit()
in progress. Subsequent events behave as expected.import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // A
Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the
emitter.listeners()
method will need to be recreated.When a single function has been added as a handler multiple times for a single event (as in the example below),
removeListener()
will remove the most recently added instance. In the example theonce('ping')
listener is removed:import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping');
Returns a reference to the
EventEmitter
, so that calls can be chained. The
readable.resume()
method causes an explicitly pausedReadable
stream to resume emitting'data'
events, switching the stream into flowing mode.The
readable.resume()
method can be used to fully consume the data from a stream without actually processing any of that data:getReadableStreamSomehow() .resume() .on('end', () => { console.log('Reached the end, but did not read anything.'); });
The
readable.resume()
method has no effect if there is a'readable'
event listener.- auto_padding?: boolean): this;
When data has been encrypted without standard block padding, calling
decipher.setAutoPadding(false)
will disable automatic padding to preventdecipher.final()
from checking for and removing padding.Turning auto padding off will only work if the input data's length is a multiple of the ciphers block size.
The
decipher.setAutoPadding()
method must be called beforedecipher.final()
.@returnsfor method chaining.
- encoding: BufferEncoding): this;
The
writable.setDefaultEncoding()
method sets the defaultencoding
for aWritable
stream.@param encodingThe new default encoding
- encoding: BufferEncoding): this;
The
readable.setEncoding()
method sets the character encoding for data read from theReadable
stream.By default, no encoding is assigned and stream data will be returned as
Buffer
objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than asBuffer
objects. For instance, callingreadable.setEncoding('utf8')
will cause the output data to be interpreted as UTF-8 data, and passed as strings. Callingreadable.setEncoding('hex')
will cause the data to be encoded in hexadecimal string format.The
Readable
stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream asBuffer
objects.const readable = getReadableStreamSomehow(); readable.setEncoding('utf8'); readable.on('data', (chunk) => { assert.equal(typeof chunk, 'string'); console.log('Got %d characters of string data:', chunk.length); });
@param encodingThe encoding to use.
- n: number): this;
By default
EventEmitter
s will print a warning if more than10
listeners are added for a particular event. This is a useful default that helps finding memory leaks. Theemitter.setMaxListeners()
method allows the limit to be modified for this specificEventEmitter
instance. The value can be set toInfinity
(or0
) to indicate an unlimited number of listeners.Returns a reference to the
EventEmitter
, so that calls can be chained. - some(): Promise<boolean>;
This method is similar to
Array.prototype.some
and calls fn on each chunk in the stream until the awaited return value istrue
(or any truthy value). Once an fn call's awaited return value for a chunk is truthy, the stream is destroyed and the promise is fulfilled with true
. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled withfalse
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
true
if fn returned a truthy value for at least one of the chunks. - @param limit
the number of chunks to take from the readable.
@returnsa stream with limit chunks taken.
- ): Promise<any[]>;
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.
@returnsa promise containing an array with the contents of the stream.
The
writable.uncork()
method flushes all data buffered since cork was called.When using
writable.cork()
andwritable.uncork()
to manage the buffering of writes to a stream, defer calls towritable.uncork()
usingprocess.nextTick()
. Doing so allows batching of allwritable.write()
calls that occur within a given Node.js event loop phase.stream.cork(); stream.write('some '); stream.write('data '); process.nextTick(() => stream.uncork());
If the
writable.cork()
method is called multiple times on a stream, the same number of calls towritable.uncork()
must be called to flush the buffered data.stream.cork(); stream.write('some '); stream.cork(); stream.write('data '); process.nextTick(() => { stream.uncork(); // The data will not be flushed until uncork() is called a second time. stream.uncork(); });
See also:
writable.cork()
.- destination?: WritableStream): this;
The
readable.unpipe()
method detaches aWritable
stream previously attached using the pipe method.If the
destination
is not specified, then all pipes are detached.If the
destination
is specified, but no pipe is set up for it, then the method does nothing.import fs from 'node:fs'; const readable = getReadableStreamSomehow(); const writable = fs.createWriteStream('file.txt'); // All the data from readable goes into 'file.txt', // but only for the first second. readable.pipe(writable); setTimeout(() => { console.log('Stop writing to file.txt.'); readable.unpipe(writable); console.log('Manually close the file stream.'); writable.end(); }, 1000);
@param destinationOptional specific stream to unpipe
- chunk: any,encoding?: BufferEncoding): void;
Passing
chunk
asnull
signals the end of the stream (EOF) and behaves the same asreadable.push(null)
, after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.The
readable.unshift()
method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.The
stream.unshift(chunk)
method cannot be called after the'end'
event has been emitted or a runtime error will be thrown.Developers using
stream.unshift()
often should consider switching to use of aTransform
stream instead. See theAPI for stream implementers
section for more information.// Pull off a header delimited by \n\n. // Use unshift() if we get too much. // Call the callback with (error, header, stream). import { StringDecoder } from 'node:string_decoder'; function parseHeader(stream, callback) { stream.on('error', callback); stream.on('readable', onReadable); const decoder = new StringDecoder('utf8'); let header = ''; function onReadable() { let chunk; while (null !== (chunk = stream.read())) { const str = decoder.write(chunk); if (str.includes('\n\n')) { // Found the header boundary. const split = str.split(/\n\n/); header += split.shift(); const remaining = split.join('\n\n'); const buf = Buffer.from(remaining, 'utf8'); stream.removeListener('error', callback); // Remove the 'readable' listener before unshifting. stream.removeListener('readable', onReadable); if (buf.length) stream.unshift(buf); // Now the body of the message can be read from the stream. callback(null, header, stream); return; } // Still reading the header. header += str; } } }
Unlike push,
stream.unshift(chunk)
will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results ifreadable.unshift()
is called during a read (i.e. from within a _read implementation on a custom stream). Following the call toreadable.unshift()
with an immediate push will reset the reading state appropriately, however it is best to simply avoid callingreadable.unshift()
while in the process of performing a read.@param chunkChunk of data to unshift onto the read queue. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray}, {DataView} ornull
. For object mode streams,chunk
may be any JavaScript value.@param encodingEncoding of string chunks. Must be a valid
Buffer
encoding, such as'utf8'
or'ascii'
. - data: ArrayBufferView
Updates the decipher with
data
. If theinputEncoding
argument is given, thedata
argument is a string using the specified encoding. If theinputEncoding
argument is not given,data
must be aBuffer
. Ifdata
is aBuffer
theninputEncoding
is ignored.The
outputEncoding
specifies the output format of the deciphered data. If the outputEncoding
is specified, a string using the specified encoding is returned. If nooutputEncoding
is provided, aBuffer
is returned.The
decipher.update()
method can be called multiple times with new data untildecipher.final()
is called. Callingdecipher.update()
afterdecipher.final()
will result in an error being thrown.data: string,Updates the decipher with
data
. If theinputEncoding
argument is given, thedata
argument is a string using the specified encoding. If theinputEncoding
argument is not given,data
must be aBuffer
. Ifdata
is aBuffer
theninputEncoding
is ignored.The
outputEncoding
specifies the output format of the deciphered data. If the outputEncoding
is specified, a string using the specified encoding is returned. If nooutputEncoding
is provided, aBuffer
is returned.The
decipher.update()
method can be called multiple times with new data untildecipher.final()
is called. Callingdecipher.update()
afterdecipher.final()
will result in an error being thrown.@param inputEncodingThe
encoding
of thedata
string.data: ArrayBufferView,inputEncoding: undefined,): string;Updates the decipher with
data
. If theinputEncoding
argument is given, thedata
argument is a string using the specified encoding. If theinputEncoding
argument is not given,data
must be aBuffer
. Ifdata
is aBuffer
theninputEncoding
is ignored.The
outputEncoding
specifies the output format of the deciphered data. If the outputEncoding
is specified, a string using the specified encoding is returned. If nooutputEncoding
is provided, aBuffer
is returned.The
decipher.update()
method can be called multiple times with new data untildecipher.final()
is called. Callingdecipher.update()
afterdecipher.final()
will result in an error being thrown.@param inputEncodingThe
encoding
of thedata
string.@param outputEncodingThe
encoding
of the return value.data: string,): string;Updates the decipher with
data
. If theinputEncoding
argument is given, thedata
argument is a string using the specified encoding. If theinputEncoding
argument is not given,data
must be aBuffer
. Ifdata
is aBuffer
theninputEncoding
is ignored.The
outputEncoding
specifies the output format of the deciphered data. If the outputEncoding
is specified, a string using the specified encoding is returned. If nooutputEncoding
is provided, aBuffer
is returned.The
decipher.update()
method can be called multiple times with new data untildecipher.final()
is called. Callingdecipher.update()
afterdecipher.final()
will result in an error being thrown.@param inputEncodingThe
encoding
of thedata
string.@param outputEncodingThe
encoding
of the return value. - wrap(stream: ReadableStream): this;
Prior to Node.js 0.10, streams did not implement the entire
node:stream
module API as it is currently defined. (SeeCompatibility
for more information.)When using an older Node.js library that emits
'data'
events and has a pause method that is advisory only, thereadable.wrap()
method can be used to create aReadable
stream that uses the old stream as its data source.It will rarely be necessary to use
readable.wrap()
but the method has been provided as a convenience for interacting with older Node.js applications and libraries.import { OldReader } from './old-api-module.js'; import { Readable } from 'node:stream'; const oreader = new OldReader(); const myReader = new Readable().wrap(oreader); myReader.on('readable', () => { myReader.read(); // etc. });
@param streamAn "old style" readable stream
- chunk: any,): boolean;
The
writable.write()
method writes some data to the stream, and calls the suppliedcallback
once the data has been fully handled. If an error occurs, thecallback
will be called with the error as its first argument. Thecallback
is called asynchronously and before'error'
is emitted.The return value is
true
if the internal buffer is less than thehighWaterMark
configured when the stream was created after admittingchunk
. Iffalse
is returned, further attempts to write data to the stream should stop until the'drain'
event is emitted.While a stream is not draining, calls to
write()
will bufferchunk
, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'
event will be emitted. Oncewrite()
returns false, do not write more chunks until the'drain'
event is emitted. While callingwrite()
on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform
, because theTransform
streams are paused by default until they are piped or a'data'
or'readable'
event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readable
and use pipe. However, if callingwrite()
is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'
event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });
A
Writable
stream in object mode will always ignore theencoding
argument.@param chunkOptional data to write. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunk
may be any JavaScript value other thannull
.@param callbackCallback for when this chunk of data is flushed.
@returnsfalse
if the stream wishes for the calling code to wait for the'drain'
event to be emitted before continuing to write additional data; otherwisetrue
.chunk: any,encoding: BufferEncoding,): boolean;The
writable.write()
method writes some data to the stream, and calls the suppliedcallback
once the data has been fully handled. If an error occurs, thecallback
will be called with the error as its first argument. Thecallback
is called asynchronously and before'error'
is emitted.The return value is
true
if the internal buffer is less than thehighWaterMark
configured when the stream was created after admittingchunk
. Iffalse
is returned, further attempts to write data to the stream should stop until the'drain'
event is emitted.While a stream is not draining, calls to
write()
will bufferchunk
, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'
event will be emitted. Oncewrite()
returns false, do not write more chunks until the'drain'
event is emitted. While callingwrite()
on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform
, because theTransform
streams are paused by default until they are piped or a'data'
or'readable'
event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readable
and use pipe. However, if callingwrite()
is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'
event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });
A
Writable
stream in object mode will always ignore theencoding
argument.@param chunkOptional data to write. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunk
may be any JavaScript value other thannull
.@param encodingThe encoding, if
chunk
is a string.@param callbackCallback for when this chunk of data is flushed.
@returnsfalse
if the stream wishes for the calling code to wait for the'drain'
event to be emitted before continuing to write additional data; otherwisetrue
.
interface DecipherChaCha20Poly1305
Instances of the
Decipher
class are used to decrypt data. The class can be used in one of two ways:- As a
stream
that is both readable and writable, where plain encrypted data is written to produce unencrypted data on the readable side, or - Using the
decipher.update()
anddecipher.final()
methods to produce the unencrypted data.
The createDecipheriv method is used to create
Decipher
instances.Decipher
objects are not to be created directly using thenew
keyword.Example: Using
Decipher
objects as streams:import { Buffer } from 'node:buffer'; const { scryptSync, createDecipheriv, } = await import('node:crypto'); const algorithm = 'aes-192-cbc'; const password = 'Password used to generate key'; // Key length is dependent on the algorithm. In this case for aes192, it is // 24 bytes (192 bits). // Use the async `crypto.scrypt()` instead. const key = scryptSync(password, 'salt', 24); // The IV is usually passed along with the ciphertext. const iv = Buffer.alloc(16, 0); // Initialization vector. const decipher = createDecipheriv(algorithm, key, iv); let decrypted = ''; decipher.on('readable', () => { let chunk; while (null !== (chunk = decipher.read())) { decrypted += chunk.toString('utf8'); } }); decipher.on('end', () => { console.log(decrypted); // Prints: some clear text data }); // Encrypted with same algorithm, key and iv. const encrypted = 'e5f79c5915c02171eec6b212d5520d44480993d7d622a7c4c2da32f6efda0ffa'; decipher.write(encrypted, 'hex'); decipher.end();
Example: Using
Decipher
and piped streams:import { createReadStream, createWriteStream, } from 'node:fs'; import { Buffer } from 'node:buffer'; const { scryptSync, createDecipheriv, } = await import('node:crypto'); const algorithm = 'aes-192-cbc'; const password = 'Password used to generate key'; // Use the async `crypto.scrypt()` instead. const key = scryptSync(password, 'salt', 24); // The IV is usually passed along with the ciphertext. const iv = Buffer.alloc(16, 0); // Initialization vector. const decipher = createDecipheriv(algorithm, key, iv); const input = createReadStream('test.enc'); const output = createWriteStream('test.js'); input.pipe(decipher).pipe(output);
Example: Using the
decipher.update()
anddecipher.final()
methods:import { Buffer } from 'node:buffer'; const { scryptSync, createDecipheriv, } = await import('node:crypto'); const algorithm = 'aes-192-cbc'; const password = 'Password used to generate key'; // Use the async `crypto.scrypt()` instead. const key = scryptSync(password, 'salt', 24); // The IV is usually passed along with the ciphertext. const iv = Buffer.alloc(16, 0); // Initialization vector. const decipher = createDecipheriv(algorithm, key, iv); // Encrypted using same algorithm, key and iv. const encrypted = 'e5f79c5915c02171eec6b212d5520d44480993d7d622a7c4c2da32f6efda0ffa'; let decrypted = decipher.update(encrypted, 'hex', 'utf8'); decrypted += decipher.final('utf8'); console.log(decrypted); // Prints: some clear text data
- allowHalfOpen: boolean
If
false
then the stream will automatically end the writable side when the readable side ends. Set initially by theallowHalfOpen
constructor option, which defaults totrue
.This can be changed manually to change the half-open behavior of an existing
Duplex
stream instance, but must be changed before the'end'
event is emitted. - readable: boolean
Is
true
if it is safe to call read, which means the stream has not been destroyed or emitted'error'
or'end'
. - readonly readableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'end'
. - readonly readableEncoding: null | BufferEncoding
Getter for the property
encoding
of a givenReadable
stream. Theencoding
property can be set using the setEncoding method. - readonly readableFlowing: null | boolean
This property reflects the current state of a
Readable
stream as described in the Three states section. - readonly readableHighWaterMark: number
Returns the value of
highWaterMark
passed when creating thisReadable
. - readonly readableLength: number
This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the
highWaterMark
. - readonly writable: boolean
Is
true
if it is safe to callwritable.write()
, which means the stream has not been destroyed, errored, or ended. - readonly writableCorked: number
Number of times
writable.uncork()
needs to be called in order to fully uncork the stream. - readonly writableEnded: boolean
Is
true
afterwritable.end()
has been called. This property does not indicate whether the data has been flushed, for this usewritable.writableFinished
instead. - readonly writableHighWaterMark: number
Return the value of
highWaterMark
passed when creating thisWritable
. - readonly writableLength: number
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the
highWaterMark
. - readonly writableNeedDrain: boolean
Is
true
if the stream's buffer has been full and stream will emit'drain'
. Calls
readable.destroy()
with anAbortError
and returns a promise that fulfills when the stream is finished.- event: 'close',listener: () => void): this;
Event emitter. The defined events on documents include:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
The same events are accepted by each of the following overloads:
event: 'data',listener: (chunk: any) => void): this;
event: 'drain',listener: () => void): this;
event: 'end',listener: () => void): this;
event: 'error',): this;
event: 'finish',listener: () => void): this;
event: 'pause',listener: () => void): this;
event: 'pipe',): this;
event: 'readable',listener: () => void): this;
event: 'resume',listener: () => void): this;
event: 'unpipe',): this;
event: string | symbol,listener: (...args: any[]) => void): this;
This method returns a new stream with chunks of the underlying stream paired with a counter in the form
[index, chunk]
. The first index value is0
and it increases by 1 for each chunk produced.@returnsa stream of indexed pairs.
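Example, a minimal sketch (asIndexedPairs() is experimental and not available in every Node.js release):
import { Readable } from 'node:stream';

// Pair each chunk with a zero-based counter: [index, chunk].
const pairs = await Readable.from(['a', 'b', 'c']).asIndexedPairs().toArray();
console.log(pairs); // [ [ 0, 'a' ], [ 1, 'b' ], [ 2, 'c' ] ]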
- stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
The
writable.cork()
method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.The primary intent of
writable.cork()
is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination,writable.cork()
buffers all the chunks untilwritable.uncork()
is called, which will pass them all towritable._writev()
, if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use ofwritable.cork()
without implementingwritable._writev()
may have an adverse effect on throughput.See also:
writable.uncork()
,writable._writev()
.- ): this;
Destroy the stream. Optionally emit an
'error'
event, and emit a'close'
event (unlessemitClose
is set tofalse
). After this call, the readable stream will release any internal resources and subsequent calls topush()
will be ignored.Once
destroy()
has been called any further calls will be a no-op and no further errors except from_destroy()
may be emitted as'error'
.Implementors should not override this method, but instead implement
readable._destroy()
.@param errorError which will be passed as payload in
'error'
event - drop(limit: number,
This method returns a new stream with the first limit chunks dropped from the start.
@param limitthe number of chunks to drop from the readable.
@returnsa stream with limit chunks dropped from the start.
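Example, a minimal sketch (drop() is experimental):
import { Readable } from 'node:stream';

// Skip the first two chunks and keep the rest.
const rest = await Readable.from([1, 2, 3, 4]).drop(2).toArray();
console.log(rest); // [3, 4]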
- emit(event: 'close'): boolean;
Synchronously calls each of the listeners registered for the event named
eventName
, in the order they were registered, passing the supplied arguments to each.Returns
true
if the event had listeners,false
otherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener
- end(cb?: () => void): this;
Calling the
writable.end()
method signals that no more data will be written to theWritable
. The optionalchunk
andencoding
arguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!
end(chunk: any,cb?: () => void): this;Calling the
writable.end()
method signals that no more data will be written to theWritable
. The optionalchunk
andencoding
arguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!
@param chunkOptional data to write. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunk
may be any JavaScript value other thannull
.end(chunk: any,encoding: BufferEncoding,cb?: () => void): this;Calling the
writable.end()
method signals that no more data will be written to theWritable
. The optionalchunk
andencoding
arguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!
@param chunkOptional data to write. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunk
may be any JavaScript value other thannull
.@param encodingThe encoding if
chunk
is a string. Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbol
s.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]
- ): Promise<boolean>;
This method is similar to
Array.prototype.every
and calls fn on each chunk in the stream to check whether all awaited return values are truthy for fn. Once an fn call's awaited return value on a chunk is falsy, the stream is destroyed and the promise is fulfilled with false. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
true
if fn returned a truthy value for every one of the chunks. This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be
await
ed.@param fna function to filter chunks from the stream. Async or not.
@returnsa stream filtered with the predicate fn.
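Example, a minimal sketch of every() and filter() (both experimental):
import { Readable } from 'node:stream';

// every(): check a predicate against all chunks.
const allPositive = await Readable.from([1, 2, 3]).every((n) => n > 0);
console.log(allPositive); // true

// filter(): keep only chunks whose (possibly async) predicate is truthy.
const evens = await Readable.from([1, 2, 3, 4])
  .filter(async (n) => n % 2 === 0)
  .toArray();
console.log(evens); // [2, 4]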
Once the
decipher.final()
method has been called, theDecipher
object can no longer be used to decrypt data. Attempts to calldecipher.final()
more than once will result in an error being thrown.@returnsAny remaining deciphered contents. If
outputEncoding
is specified, a string is returned. If anoutputEncoding
is not provided, a Buffer is returned.outputEncoding: BufferEncoding): string;Once the
decipher.final()
method has been called, theDecipher
object can no longer be used to decrypt data. Attempts to calldecipher.final()
more than once will result in an error being thrown.@param outputEncodingThe
encoding
of the return value.@returnsAny remaining deciphered contents. If
outputEncoding
is specified, a string is returned. If anoutputEncoding
is not provided, a Buffer is returned.- ): Promise<undefined | T>;
This method is similar to
Array.prototype.find
and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefined
if no element was found.find(): Promise<any>;This method is similar to
Array.prototype.find
and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefined
if no element was found. This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.
It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.
@param fna function to map over every chunk in the stream. May be async. May be a stream or generator.
@returnsa stream flat-mapped with the function fn.
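Example, a minimal sketch of find() and flatMap() (both experimental):
import { Readable } from 'node:stream';

// find(): resolves with the first chunk matching the predicate.
const firstBig = await Readable.from([1, 10, 3]).find((n) => n > 5);
console.log(firstBig); // 10

// flatMap(): each chunk may expand into several chunks in the result,
// here by returning an array (any iterable or async iterable works).
const words = await Readable.from(['a b', 'c'])
  .flatMap((line) => line.split(' '))
  .toArray();
console.log(words); // ['a', 'b', 'c']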
- ): Promise<void>;
This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be
await
ed.This method is different from
for await...of
loops in that it can optionally process chunks concurrently. In addition, aforEach
iteration can only be stopped by having passed asignal
option and aborting the related AbortController whilefor await...of
can be stopped withbreak
orreturn
. In either case the stream will be destroyed.This method is different from listening to the
'data'
event in that it uses thereadable
event in the underlying machinery and can limit the number of concurrent fn calls.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise for when the stream has finished.
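Example, a minimal sketch (forEach() is experimental; the concurrency option caps simultaneous fn calls):
import { Readable } from 'node:stream';

// Run an async function for every chunk, at most two at a time.
await Readable.from(['a', 'b', 'c']).forEach(async (chunk) => {
  console.log(chunk);
}, { concurrency: 2 });
console.log('done');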
Returns the current max listener value for the
EventEmitter
which is either set byemitter.setMaxListeners(n)
or defaults to EventEmitter.defaultMaxListeners.The
readable.isPaused()
method returns the current operating state of theReadable
. This is used primarily by the mechanism that underlies thereadable.pipe()
method. In most typical cases, there will be no reason to use this method directly.const readable = new stream.Readable(); readable.isPaused(); // === false readable.pause(); readable.isPaused(); // === true readable.resume(); readable.isPaused(); // === false
- options?: { destroyOnReturn: boolean }): AsyncIterator<any>;
The iterator created by this method gives users the option to cancel the destruction of the stream if the
for await...of
loop is exited byreturn
,break
, orthrow
, or if the iterator should destroy the stream if the stream emitted an error during iteration. - eventName: string | symbol,listener?: Function): number;
Returns the number of listeners listening for the event named
eventName
. Iflistener
is provided, it will return how many times the listener is found in the list of the listeners of the event.@param eventNameThe name of the event being listened for
@param listenerThe event handler function
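Example, a minimal sketch (the optional listener argument is only supported in Node.js releases whose listenerCount() accepts it, as documented above):
import { EventEmitter } from 'node:events';

const ee = new EventEmitter();
const handler = () => {};
ee.on('ping', handler);
ee.on('ping', () => {});
console.log(ee.listenerCount('ping'));          // 2
console.log(ee.listenerCount('ping', handler)); // 1 (occurrences of this exact listener)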
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName
.server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ]
- map(
This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be
await
ed before being passed to the result stream.@param fna function to map over every chunk in the stream. Async or not.
@returnsa stream mapped with the function fn.
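Example, a minimal sketch (map() is experimental; async mappers are awaited before the result is passed on):
import { Readable } from 'node:stream';

// Double every chunk.
const doubled = await Readable.from([1, 2, 3])
  .map(async (n) => n * 2)
  .toArray();
console.log(doubled); // [2, 4, 6]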
- eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for
emitter.removeListener()
. - on(event: 'close',listener: () => void): this;
Adds the
listener
function to the end of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
- once(event: 'close',listener: () => void): this;
Adds a one-time
listener
function for the event namedeventName
. The next timeeventName
is triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
The
readable.pause()
method will cause a stream in flowing mode to stop emitting'data'
events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.const readable = getReadableStreamSomehow(); readable.on('data', (chunk) => { console.log(`Received ${chunk.length} bytes of data.`); readable.pause(); console.log('There will be no additional data for 1 second.'); setTimeout(() => { console.log('Now data will start flowing again.'); readable.resume(); }, 1000); });
The
readable.pause()
method has no effect if there is a'readable'
event listener.- event: 'close',listener: () => void): this;
Adds the
listener
function to the beginning of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
- event: 'close',listener: () => void): this;
Adds a one-time
listener
function for the event namedeventName
to the beginning of the listeners array. The next timeeventName
is triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName
, including any wrappers (such as those created by.once()
).import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log');
- read(size?: number): any;
The
readable.read()
method reads data out of the internal buffer and returns it. If no data is available to be read,null
is returned. By default, the data is returned as aBuffer
object unless an encoding has been specified using thereadable.setEncoding()
method or the stream is operating in object mode.The optional
size
argument specifies a specific number of bytes to read. Ifsize
bytes are not available to be read,null
will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.If the
size
argument is not specified, all of the data contained in the internal buffer will be returned.The
size
argument must be less than or equal to 1 GiB.The
readable.read()
method should only be called onReadable
streams operating in paused mode. In flowing mode,readable.read()
is called automatically until the internal buffer is fully drained.const readable = getReadableStreamSomehow(); // 'readable' may be triggered multiple times as data is buffered in readable.on('readable', () => { let chunk; console.log('Stream is readable (new data received in buffer)'); // Use a loop to make sure we read all currently available data while (null !== (chunk = readable.read())) { console.log(`Read ${chunk.length} bytes of data...`); } }); // 'end' will be triggered once when there is no more data available readable.on('end', () => { console.log('Reached end of stream.'); });
Each call to
readable.read()
returns a chunk of data, ornull
. The chunks are not concatenated. Awhile
loop is necessary to consume all data currently in the buffer. When reading a large file.read()
may returnnull
, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new'readable'
event will be emitted when there is more data in the buffer. Finally the'end'
event will be emitted when there is no more data to come.Therefore to read a file's whole contents from a
readable
, it is necessary to collect chunks across multiple'readable'
events:const chunks = []; readable.on('readable', () => { let chunk; while (null !== (chunk = readable.read())) { chunks.push(chunk); } }); readable.on('end', () => { const content = chunks.join(''); });
A
Readable
stream in object mode will always return a single item from a call toreadable.read(size)
, regardless of the value of thesize
argument.If the
readable.read()
method returns a chunk of data, a'data'
event will also be emitted.Calling read after the
'end'
event has been emitted will returnnull
. No runtime error will be raised.@param sizeOptional argument to specify how much data to read.
- initial?: undefined,): Promise<T>;
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeError
with theERR_INVALID_ARGS
code property.The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the
readable.map
method.@param fna reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
initial: T,): Promise<T>;This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeError
with theERR_INVALID_ARGS
code property.The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the
readable.map
method.@param fna reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
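Example, a minimal sketch (reduce() is experimental):
import { Readable } from 'node:stream';

// Fold the stream into a single value, here a running sum
// starting from the initial value 0.
const total = await Readable.from([1, 2, 3, 4]).reduce((sum, n) => sum + n, 0);
console.log(total); // 10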
- eventName?: string | symbol): this;
Removes all listeners, or those of the specified
eventName
.It is bad practice to remove listeners added elsewhere in the code, particularly when the
EventEmitter
instance was created by some other component or module (e.g. sockets or file streams).Returns a reference to the
EventEmitter
, so that calls can be chained. - event: 'close',listener: () => void): this;
Removes the specified
listener
from the listener array for the event namedeventName
.const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback);
removeListener()
will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specifiedeventName
, thenremoveListener()
must be called multiple times to remove each instance.Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any
removeListener()
orremoveAllListeners()
calls after emitting and before the last listener finishes execution will not remove them fromemit()
in progress. Subsequent events behave as expected.import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // A
Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the
emitter.listeners()
method will need to be recreated.When a single function has been added as a handler multiple times for a single event (as in the example below),
removeListener()
will remove the most recently added instance. In the example theonce('ping')
listener is removed:import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping');
Returns a reference to the
EventEmitter
, so that calls can be chained. The
readable.resume()
method causes an explicitly pausedReadable
stream to resume emitting'data'
events, switching the stream into flowing mode.The
readable.resume()
method can be used to fully consume the data from a stream without actually processing any of that data:getReadableStreamSomehow() .resume() .on('end', () => { console.log('Reached the end, but did not read anything.'); });
The
readable.resume()
method has no effect if there is a'readable'
event listener.- auto_padding?: boolean): this;
When data has been encrypted without standard block padding, calling
decipher.setAutoPadding(false)
will disable automatic padding to preventdecipher.final()
from checking for and removing padding.Turning auto padding off will only work if the input data's length is a multiple of the ciphers block size.
The
decipher.setAutoPadding()
method must be called beforedecipher.final()
.@returnsfor method chaining.
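Example, a minimal round-trip sketch (reusing the scrypt-derived key and zero IV from the examples above; padding is disabled on both sides, so the 16-byte message is exactly one AES block):
import { Buffer } from 'node:buffer';
const { scryptSync, createCipheriv, createDecipheriv } = await import('node:crypto');

const key = scryptSync('Password used to generate key', 'salt', 24);
const iv = Buffer.alloc(16, 0);

// Produce a block-aligned ciphertext: no padding block is appended.
const cipher = createCipheriv('aes-192-cbc', key, iv);
cipher.setAutoPadding(false);
const ciphertext = Buffer.concat([cipher.update('16 byte message!'), cipher.final()]);

// Decrypt without expecting padding. setAutoPadding(false) must be
// called before decipher.final().
const decipher = createDecipheriv('aes-192-cbc', key, iv);
decipher.setAutoPadding(false);
const plaintext = Buffer.concat([decipher.update(ciphertext), decipher.final()]);
console.log(plaintext.toString('utf8')); // Prints: 16 byte message!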
- encoding: BufferEncoding): this;
The
writable.setDefaultEncoding()
method sets the defaultencoding
for aWritable
stream.@param encodingThe new default encoding
- encoding: BufferEncoding): this;
The
readable.setEncoding()
method sets the character encoding for data read from theReadable
stream.By default, no encoding is assigned and stream data will be returned as
Buffer
objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than asBuffer
objects. For instance, callingreadable.setEncoding('utf8')
will cause the output data to be interpreted as UTF-8 data, and passed as strings. Callingreadable.setEncoding('hex')
will cause the data to be encoded in hexadecimal string format.The
Readable
stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream asBuffer
objects.const readable = getReadableStreamSomehow(); readable.setEncoding('utf8'); readable.on('data', (chunk) => { assert.equal(typeof chunk, 'string'); console.log('Got %d characters of string data:', chunk.length); });
@param encodingThe encoding to use.
- n: number): this;
By default
EventEmitter
s will print a warning if more than10
listeners are added for a particular event. This is a useful default that helps finding memory leaks. Theemitter.setMaxListeners()
method allows the limit to be modified for this specificEventEmitter
instance. The value can be set toInfinity
(or0
) to indicate an unlimited number of listeners.Returns a reference to the
EventEmitter
, so that calls can be chained. - some(): Promise<boolean>;
This method is similar to
Array.prototype.some
and calls fn on each chunk in the stream until the awaited return value istrue
(or any truthy value). Once an fn call's awaited return value on a chunk is truthy, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
true
if fn returned a truthy value for at least one of the chunks.
- take(limit: number,
This method returns a new stream with the first limit chunks.
@param limit
the number of chunks to take from the readable.
@returnsa stream with limit chunks taken.
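Example, a minimal sketch of some() and take() (both experimental):
import { Readable } from 'node:stream';

// some(): stops reading as soon as one chunk satisfies the predicate.
const hasNegative = await Readable.from([1, -2, 3]).some((n) => n < 0);
console.log(hasNegative); // true

// take(): keep only the first `limit` chunks.
const firstTwo = await Readable.from([1, 2, 3, 4]).take(2).toArray();
console.log(firstTwo); // [1, 2]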
- ): Promise<any[]>;
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.
@returnsa promise containing an array with the contents of the stream.
The
writable.uncork()
method flushes all data buffered since cork was called.When using
writable.cork()
andwritable.uncork()
to manage the buffering of writes to a stream, defer calls towritable.uncork()
usingprocess.nextTick()
. Doing so allows batching of allwritable.write()
calls that occur within a given Node.js event loop phase.stream.cork(); stream.write('some '); stream.write('data '); process.nextTick(() => stream.uncork());
If the
writable.cork()
method is called multiple times on a stream, the same number of calls towritable.uncork()
must be called to flush the buffered data.stream.cork(); stream.write('some '); stream.cork(); stream.write('data '); process.nextTick(() => { stream.uncork(); // The data will not be flushed until uncork() is called a second time. stream.uncork(); });
See also:
writable.cork()
.- destination?: WritableStream): this;
The
readable.unpipe()
method detaches aWritable
stream previously attached using the pipe method.If the
destination
is not specified, then all pipes are detached.If the
destination
is specified, but no pipe is set up for it, then the method does nothing.import fs from 'node:fs'; const readable = getReadableStreamSomehow(); const writable = fs.createWriteStream('file.txt'); // All the data from readable goes into 'file.txt', // but only for the first second. readable.pipe(writable); setTimeout(() => { console.log('Stop writing to file.txt.'); readable.unpipe(writable); console.log('Manually close the file stream.'); writable.end(); }, 1000);
@param destinationOptional specific stream to unpipe
- chunk: any,encoding?: BufferEncoding): void;
Passing
chunk
asnull
signals the end of the stream (EOF) and behaves the same asreadable.push(null)
, after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.The
readable.unshift()
method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.The
stream.unshift(chunk)
method cannot be called after the'end'
event has been emitted or a runtime error will be thrown.Developers using
stream.unshift()
should often consider switching to use of a Transform
stream instead. See theAPI for stream implementers
section for more information.// Pull off a header delimited by \n\n. // Use unshift() if we get too much. // Call the callback with (error, header, stream). import { StringDecoder } from 'node:string_decoder'; function parseHeader(stream, callback) { stream.on('error', callback); stream.on('readable', onReadable); const decoder = new StringDecoder('utf8'); let header = ''; function onReadable() { let chunk; while (null !== (chunk = stream.read())) { const str = decoder.write(chunk); if (str.includes('\n\n')) { // Found the header boundary. const split = str.split(/\n\n/); header += split.shift(); const remaining = split.join('\n\n'); const buf = Buffer.from(remaining, 'utf8'); stream.removeListener('error', callback); // Remove the 'readable' listener before unshifting. stream.removeListener('readable', onReadable); if (buf.length) stream.unshift(buf); // Now the body of the message can be read from the stream. callback(null, header, stream); return; } // Still reading the header. header += str; } } }
Unlike push,
stream.unshift(chunk)
will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results ifreadable.unshift()
is called during a read (i.e. from within a _read implementation on a custom stream). Following the call toreadable.unshift()
with an immediate push will reset the reading state appropriately, however it is best to simply avoid callingreadable.unshift()
while in the process of performing a read.@param chunkChunk of data to unshift onto the read queue. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray}, {DataView} ornull
. For object mode streams,chunk
may be any JavaScript value.@param encodingEncoding of string chunks. Must be a valid
Buffer
encoding, such as'utf8'
or'ascii'
. - data: ArrayBufferView
Updates the decipher with
data
. If theinputEncoding
argument is given, thedata
argument is a string using the specified encoding. If theinputEncoding
argument is not given,data
must be aBuffer
. Ifdata
is aBuffer
theninputEncoding
is ignored.The
outputEncoding
specifies the output format of the deciphered data. If the outputEncoding
is specified, a string using the specified encoding is returned. If nooutputEncoding
is provided, aBuffer
is returned.The
decipher.update()
method can be called multiple times with new data untildecipher.final()
is called. Callingdecipher.update()
afterdecipher.final()
will result in an error being thrown.data: string,Updates the decipher with
data
. If theinputEncoding
argument is given, thedata
argument is a string using the specified encoding. If theinputEncoding
argument is not given,data
must be aBuffer
. Ifdata
is aBuffer
theninputEncoding
is ignored.The
outputEncoding
specifies the output format of the deciphered data. If the outputEncoding
is specified, a string using the specified encoding is returned. If nooutputEncoding
is provided, aBuffer
is returned.The
decipher.update()
method can be called multiple times with new data untildecipher.final()
is called. Callingdecipher.update()
afterdecipher.final()
will result in an error being thrown.@param inputEncodingThe
encoding
of thedata
string.data: ArrayBufferView,inputEncoding: undefined,): string;Updates the decipher with
data
. If theinputEncoding
argument is given, thedata
argument is a string using the specified encoding. If theinputEncoding
argument is not given,data
must be aBuffer
. Ifdata
is aBuffer
theninputEncoding
is ignored.The
outputEncoding
specifies the output format of the deciphered data. If the outputEncoding
is specified, a string using the specified encoding is returned. If nooutputEncoding
is provided, aBuffer
is returned.The
decipher.update()
method can be called multiple times with new data untildecipher.final()
is called. Callingdecipher.update()
afterdecipher.final()
will result in an error being thrown.@param inputEncodingThe
encoding
of thedata
string.@param outputEncodingThe
encoding
of the return value.data: string,): string;Updates the decipher with
data
. If theinputEncoding
argument is given, thedata
argument is a string using the specified encoding. If theinputEncoding
argument is not given,data
must be aBuffer
. Ifdata
is aBuffer
theninputEncoding
is ignored.The
outputEncoding
specifies the output format of the deciphered data. If the outputEncoding
is specified, a string using the specified encoding is returned. If nooutputEncoding
is provided, aBuffer
is returned.The
decipher.update()
method can be called multiple times with new data untildecipher.final()
is called. Callingdecipher.update()
afterdecipher.final()
will result in an error being thrown.@param inputEncodingThe
encoding
of thedata
string.@param outputEncodingThe
encoding
of the return value. - wrap(stream: ReadableStream): this;
Prior to Node.js 0.10, streams did not implement the entire
node:stream
module API as it is currently defined. (SeeCompatibility
for more information.)When using an older Node.js library that emits
'data'
events and has a pause method that is advisory only, thereadable.wrap()
method can be used to create aReadable
stream that uses the old stream as its data source.It will rarely be necessary to use
readable.wrap()
but the method has been provided as a convenience for interacting with older Node.js applications and libraries.import { OldReader } from './old-api-module.js'; import { Readable } from 'node:stream'; const oreader = new OldReader(); const myReader = new Readable().wrap(oreader); myReader.on('readable', () => { myReader.read(); // etc. });
@param streamAn "old style" readable stream
- chunk: any,): boolean;
The
writable.write()
method writes some data to the stream, and calls the suppliedcallback
once the data has been fully handled. If an error occurs, thecallback
will be called with the error as its first argument. Thecallback
is called asynchronously and before'error'
is emitted.The return value is
true
if the internal buffer is less than thehighWaterMark
configured when the stream was created after admittingchunk
. Iffalse
is returned, further attempts to write data to the stream should stop until the'drain'
event is emitted.While a stream is not draining, calls to
write()
will bufferchunk
, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'
event will be emitted. Oncewrite()
returns false, do not write more chunks until the'drain'
event is emitted. While callingwrite()
on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform
, because theTransform
streams are paused by default until they are piped or a'data'
or'readable'
event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readable
and use pipe. However, if callingwrite()
is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'
event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });
A
Writable
stream in object mode will always ignore theencoding
argument.@param chunkOptional data to write. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunk
may be any JavaScript value other thannull
.@param callbackCallback for when this chunk of data is flushed.
@returnsfalse
if the stream wishes for the calling code to wait for the'drain'
event to be emitted before continuing to write additional data; otherwisetrue
.chunk: any,encoding: BufferEncoding,): boolean;The
writable.write()
method writes some data to the stream, and calls the suppliedcallback
once the data has been fully handled. If an error occurs, thecallback
will be called with the error as its first argument. Thecallback
is called asynchronously and before'error'
is emitted.The return value is
true
if the internal buffer is less than thehighWaterMark
configured when the stream was created after admittingchunk
. Iffalse
is returned, further attempts to write data to the stream should stop until the'drain'
event is emitted.While a stream is not draining, calls to
write()
will bufferchunk
, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'
event will be emitted. Oncewrite()
returns false, do not write more chunks until the'drain'
event is emitted. While callingwrite()
on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform
, because theTransform
streams are paused by default until they are piped or a'data'
or'readable'
event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readable
and use pipe. However, if callingwrite()
is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'
event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });
A
Writable
stream in object mode will always ignore theencoding
argument.@param chunkOptional data to write. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunk
may be any JavaScript value other thannull
.@param encodingThe encoding, if
chunk
is a string.@param callbackCallback for when this chunk of data is flushed.
@returnsfalse
if the stream wishes for the calling code to wait for the'drain'
event to be emitted before continuing to write additional data; otherwisetrue
.
interface DecipherGCM
Instances of the
Decipher
class are used to decrypt data. The class can be used in one of two ways:- As a
stream
that is both readable and writable, where plain encrypted data is written to produce unencrypted data on the readable side, or - Using the
decipher.update()
anddecipher.final()
methods to produce the unencrypted data.
The createDecipheriv method is used to create
Decipher
instances.Decipher
objects are not to be created directly using thenew
keyword.Example: Using
Decipher
objects as streams:
import { Buffer } from 'node:buffer';
const {
  scryptSync,
  createDecipheriv,
} = await import('node:crypto');

const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';
// Key length is dependent on the algorithm. In this case for aes192, it is
// 24 bytes (192 bits).
// Use the async `crypto.scrypt()` instead.
const key = scryptSync(password, 'salt', 24);
// The IV is usually passed along with the ciphertext.
const iv = Buffer.alloc(16, 0); // Initialization vector.

const decipher = createDecipheriv(algorithm, key, iv);

let decrypted = '';
decipher.on('readable', () => {
  let chunk;
  while (null !== (chunk = decipher.read())) {
    decrypted += chunk.toString('utf8');
  }
});
decipher.on('end', () => {
  console.log(decrypted);
  // Prints: some clear text data
});

// Encrypted with same algorithm, key and iv.
const encrypted =
  'e5f79c5915c02171eec6b212d5520d44480993d7d622a7c4c2da32f6efda0ffa';
decipher.write(encrypted, 'hex');
decipher.end();
Example: Using
Decipher
and piped streams:
import {
  createReadStream,
  createWriteStream,
} from 'node:fs';
import { Buffer } from 'node:buffer';
const {
  scryptSync,
  createDecipheriv,
} = await import('node:crypto');

const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';
// Use the async `crypto.scrypt()` instead.
const key = scryptSync(password, 'salt', 24);
// The IV is usually passed along with the ciphertext.
const iv = Buffer.alloc(16, 0); // Initialization vector.

const decipher = createDecipheriv(algorithm, key, iv);

const input = createReadStream('test.enc');
const output = createWriteStream('test.js');

input.pipe(decipher).pipe(output);
Example: Using the
decipher.update()
anddecipher.final()
methods:
import { Buffer } from 'node:buffer';
const {
  scryptSync,
  createDecipheriv,
} = await import('node:crypto');

const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';
// Use the async `crypto.scrypt()` instead.
const key = scryptSync(password, 'salt', 24);
// The IV is usually passed along with the ciphertext.
const iv = Buffer.alloc(16, 0); // Initialization vector.

const decipher = createDecipheriv(algorithm, key, iv);

// Encrypted using same algorithm, key and iv.
const encrypted =
  'e5f79c5915c02171eec6b212d5520d44480993d7d622a7c4c2da32f6efda0ffa';
let decrypted = decipher.update(encrypted, 'hex', 'utf8');
decrypted += decipher.final('utf8');
console.log(decrypted);
// Prints: some clear text data
- allowHalfOpen: boolean
If
false
then the stream will automatically end the writable side when the readable side ends. Set initially by theallowHalfOpen
constructor option, which defaults totrue
.This can be changed manually to change the half-open behavior of an existing
Duplex
stream instance, but must be changed before the'end'
event is emitted. - readable: boolean
Is
true
if it is safe to call read, which means the stream has not been destroyed or emitted'error'
or'end'
. - readonly readableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'end'
. - readonly readableEncoding: null | BufferEncoding
Getter for the property
encoding
of a givenReadable
stream. Theencoding
property can be set using the setEncoding method. - readonly readableFlowing: null | boolean
This property reflects the current state of a
Readable
stream as described in the Three states section. - readonly readableHighWaterMark: number
Returns the value of
highWaterMark
passed when creating thisReadable
. - readonly readableLength: number
This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the
highWaterMark
. - readonly writable: boolean
Is
true
if it is safe to callwritable.write()
, which means the stream has not been destroyed, errored, or ended. - readonly writableCorked: number
Number of times
writable.uncork()
needs to be called in order to fully uncork the stream. - readonly writableEnded: boolean
Is
true
afterwritable.end()
has been called. This property does not indicate whether the data has been flushed; for this, use writable.writableFinished
instead. - readonly writableHighWaterMark: number
Return the value of
highWaterMark
passed when creating thisWritable
. - readonly writableLength: number
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the
highWaterMark
. - readonly writableNeedDrain: boolean
Is
true
if the stream's buffer has been full and the stream will emit 'drain'
. Calls
readable.destroy()
with anAbortError
and returns a promise that fulfills when the stream is finished.- event: 'close',listener: () => void): this;
Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'data',listener: (chunk: any) => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'drain',listener: () => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'end',listener: () => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'error',): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'finish',listener: () => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'pause',listener: () => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'pipe',): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'readable',listener: () => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'resume',listener: () => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'unpipe',): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: string | symbol,listener: (...args: any[]) => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
This method returns a new stream with chunks of the underlying stream paired with a counter in the form
[index, chunk]
. The first index value is0
and it increases by 1 for each chunk produced.@returnsa stream of indexed pairs.
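Example (a minimal sketch; the method name asIndexedPairs and the experimental Readable.from()/toArray() helpers are assumptions based on this description):
import { Readable } from 'node:stream';

// Each chunk is wrapped as [index, chunk]; toArray() collects the result.
const pairs = await Readable.from(['a', 'b', 'c'])
  .asIndexedPairs()
  .toArray();
console.log(pairs); // Prints: [ [ 0, 'a' ], [ 1, 'b' ], [ 2, 'c' ] ]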
- stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
The
writable.cork()
method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.The primary intent of
writable.cork()
is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination,writable.cork()
buffers all the chunks untilwritable.uncork()
is called, which will pass them all towritable._writev()
, if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use ofwritable.cork()
without implementingwritable._writev()
may have an adverse effect on throughput.See also:
writable.uncork()
,writable._writev()
.- ): this;
Destroy the stream. Optionally emit an
'error'
event, and emit a'close'
event (unlessemitClose
is set tofalse
). After this call, the readable stream will release any internal resources and subsequent calls topush()
will be ignored.Once
destroy()
has been called any further calls will be a no-op and no further errors except from_destroy()
may be emitted as'error'
.Implementors should not override this method, but instead implement
readable._destroy()
.@param errorError which will be passed as payload in
'error'
event - drop(limit: number,
This method returns a new stream with the first limit chunks dropped from the start.
@param limitthe number of chunks to drop from the readable.
@returnsa stream with limit chunks dropped from the start.
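Example (a minimal sketch using Node's experimental Readable.from() and toArray() stream helpers):
import { Readable } from 'node:stream';

// Skip the first two chunks; the remaining chunks flow through unchanged.
const rest = await Readable.from([1, 2, 3, 4]).drop(2).toArray();
console.log(rest); // Prints: [ 3, 4 ]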
- emit(event: 'close'): boolean;
Synchronously calls each of the listeners registered for the event named
eventName
, in the order they were registered, passing the supplied arguments to each.Returns
true
if the event had listeners,false
otherwise.
import { EventEmitter } from 'node:events';
const myEmitter = new EventEmitter();

// First listener
myEmitter.on('event', function firstListener() {
  console.log('Helloooo! first listener');
});
// Second listener
myEmitter.on('event', function secondListener(arg1, arg2) {
  console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
});
// Third listener
myEmitter.on('event', function thirdListener(...args) {
  const parameters = args.join(', ');
  console.log(`event with parameters ${parameters} in third listener`);
});

console.log(myEmitter.listeners('event'));

myEmitter.emit('event', 1, 2, 3, 4, 5);

// Prints:
// [
//   [Function: firstListener],
//   [Function: secondListener],
//   [Function: thirdListener]
// ]
// Helloooo! first listener
// event with parameters 1, 2 in second listener
// event with parameters 1, 2, 3, 4, 5 in third listener
- end(cb?: () => void): this;
Calling the
writable.end()
method signals that no more data will be written to theWritable
. The optionalchunk
andencoding
arguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'.
import fs from 'node:fs';
const file = fs.createWriteStream('example.txt');
file.write('hello, ');
file.end('world!');
// Writing more now is not allowed!
end(chunk: any,cb?: () => void): this;Calling the
writable.end()
method signals that no more data will be written to theWritable
. The optionalchunk
andencoding
arguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'.
import fs from 'node:fs';
const file = fs.createWriteStream('example.txt');
file.write('hello, ');
file.end('world!');
// Writing more now is not allowed!
@param chunkOptional data to write. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunk
may be any JavaScript value other thannull
.end(chunk: any,encoding: BufferEncoding,cb?: () => void): this;Calling the
writable.end()
method signals that no more data will be written to theWritable
. The optionalchunk
andencoding
arguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'.
import fs from 'node:fs';
const file = fs.createWriteStream('example.txt');
file.write('hello, ');
file.end('world!');
// Writing more now is not allowed!
@param chunkOptional data to write. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunk
may be any JavaScript value other thannull
.@param encodingThe encoding if
chunk
is a string. Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbol
s.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]
- ): Promise<boolean>;
This method is similar to
Array.prototype.every
and calls fn on each chunk in the stream to check whether every awaited return value of fn is truthy. As soon as the awaited return value of an fn call on a chunk is falsy, the stream is destroyed and the promise is fulfilled with false. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
true
if fn returned a truthy value for every one of the chunks. This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be
await
ed.@param fna function to filter chunks from the stream. Async or not.
@returnsa stream filtered with the predicate fn.
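Example (a minimal sketch of every() and filter(), assuming Node's experimental Readable.from() and toArray() helpers):
import { Readable } from 'node:stream';

// every() resolves as soon as the answer is known; a falsy result destroys the stream early.
const allPositive = await Readable.from([1, 2, 3]).every((n) => n > 0);
console.log(allPositive); // Prints: true

// filter() keeps only the chunks for which the predicate is truthy.
const evens = await Readable.from([1, 2, 3, 4])
  .filter((n) => n % 2 === 0)
  .toArray();
console.log(evens); // Prints: [ 2, 4 ]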
Once the
decipher.final()
method has been called, theDecipher
object can no longer be used to decrypt data. Attempts to calldecipher.final()
more than once will result in an error being thrown.@returnsAny remaining deciphered contents. If
outputEncoding
is specified, a string is returned. If anoutputEncoding
is not provided, a Buffer is returned.outputEncoding: BufferEncoding): string;Once the
decipher.final()
method has been called, theDecipher
object can no longer be used to decrypt data. Attempts to calldecipher.final()
more than once will result in an error being thrown.@param outputEncodingThe
encoding
of the return value.@returnsAny remaining deciphered contents. If
outputEncoding
is specified, a string is returned. If anoutputEncoding
is not provided, a Buffer is returned.- ): Promise<undefined | T>;
This method is similar to
Array.prototype.find
and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled withundefined
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefined
if no element was found.find(): Promise<any>;This method is similar to
Array.prototype.find
and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled withundefined
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefined
if no element was found. This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.
It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.
@param fna function to map over every chunk in the stream. May be async. May be a stream or generator.
@returnsa stream flat-mapped with the function fn.
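Example (a minimal sketch of find() and flatMap(), assuming Node's experimental Readable.from() and toArray() helpers):
import { Readable } from 'node:stream';

// find() resolves with the first matching chunk, or undefined if none matches.
const firstLarge = await Readable.from([1, 5, 10]).find((n) => n > 4);
console.log(firstLarge); // Prints: 5

// flatMap() lets a single chunk expand into several output chunks.
const words = await Readable.from(['a b', 'c'])
  .flatMap((line) => line.split(' '))
  .toArray();
console.log(words); // Prints: [ 'a', 'b', 'c' ]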
- ): Promise<void>;
This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be
await
ed.This method is different from
for await...of
loops in that it can optionally process chunks concurrently. In addition, aforEach
iteration can only be stopped by having passed asignal
option and aborting the related AbortController whilefor await...of
can be stopped withbreak
orreturn
. In either case the stream will be destroyed.This method is different from listening to the
'data'
event in that it uses thereadable
event in the underlying machinery and can limit the number of concurrent fn calls.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise for when the stream has finished.
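Example (a minimal sketch; the { concurrency } option name is an assumption based on the description above):
import { setTimeout as sleep } from 'node:timers/promises';
import { Readable } from 'node:stream';

// Up to two fn calls run at a time; the returned promise settles once
// every chunk has been processed.
await Readable.from([1, 2, 3, 4]).forEach(async (n) => {
  await sleep(10);
  console.log(n);
}, { concurrency: 2 });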
Returns the current max listener value for the
EventEmitter
which is either set byemitter.setMaxListeners(n)
or defaults to EventEmitter.defaultMaxListeners.The
readable.isPaused()
method returns the current operating state of theReadable
. This is used primarily by the mechanism that underlies thereadable.pipe()
method. In most typical cases, there will be no reason to use this method directly.
const readable = new stream.Readable();

readable.isPaused(); // === false
readable.pause();
readable.isPaused(); // === true
readable.resume();
readable.isPaused(); // === false
- options?: { destroyOnReturn: boolean }): AsyncIterator<any>;
The iterator created by this method gives users the option to cancel the destruction of the stream if the
for await...of
loop is exited byreturn
,break
, orthrow
, or if the iterator should destroy the stream if the stream emitted an error during iteration. - eventName: string | symbol,listener?: Function): number;
Returns the number of listeners listening for the event named
eventName
. Iflistener
is provided, it will return how many times the listener is found in the list of the listeners of the event.@param eventNameThe name of the event being listened for
@param listenerThe event handler function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName
.server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ]
- map(
This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be
await
ed before being passed to the result stream.@param fna function to map over every chunk in the stream. Async or not.
@returnsa stream mapped with the function fn.
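Example (a minimal sketch; the { concurrency } option is an assumption based on the experimental stream helpers):
import { Readable } from 'node:stream';

// Each chunk is transformed; async mappers are awaited before their
// results are passed downstream.
const doubled = await Readable.from([1, 2, 3])
  .map(async (n) => n * 2, { concurrency: 2 })
  .toArray();
console.log(doubled); // Prints: [ 2, 4, 6 ]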
- eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for
emitter.removeListener()
. - on(event: 'close',listener: () => void): this;
Adds the
listener
function to the end of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
- once(event: 'close',listener: () => void): this;
Adds a one-time
listener
function for the event namedeventName
. The next timeeventName
is triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
The
readable.pause()
method will cause a stream in flowing mode to stop emitting'data'
events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.const readable = getReadableStreamSomehow(); readable.on('data', (chunk) => { console.log(`Received ${chunk.length} bytes of data.`); readable.pause(); console.log('There will be no additional data for 1 second.'); setTimeout(() => { console.log('Now data will start flowing again.'); readable.resume(); }, 1000); });
The
readable.pause()
method has no effect if there is a'readable'
event listener.- event: 'close',listener: () => void): this;
Adds the
listener
function to the beginning of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
- event: 'close',listener: () => void): this;
Adds a one-time
listener
function for the event namedeventName
to the beginning of the listeners array. The next timeeventName
is triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName
, including any wrappers (such as those created by.once()
).
import { EventEmitter } from 'node:events';
const emitter = new EventEmitter();
emitter.once('log', () => console.log('log once'));

// Returns a new Array with a function `onceWrapper` which has a property
// `listener` which contains the original listener bound above
const listeners = emitter.rawListeners('log');
const logFnWrapper = listeners[0];

// Logs "log once" to the console and does not unbind the `once` event
logFnWrapper.listener();

// Logs "log once" to the console and removes the listener
logFnWrapper();

emitter.on('log', () => console.log('log persistently'));
// Will return a new Array with a single function bound by `.on()` above
const newListeners = emitter.rawListeners('log');

// Logs "log persistently" twice
newListeners[0]();
emitter.emit('log');
- read(size?: number): any;
The
readable.read()
method reads data out of the internal buffer and returns it. If no data is available to be read,null
is returned. By default, the data is returned as aBuffer
object unless an encoding has been specified using thereadable.setEncoding()
method or the stream is operating in object mode.The optional
size
argument specifies a specific number of bytes to read. Ifsize
bytes are not available to be read,null
will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.If the
size
argument is not specified, all of the data contained in the internal buffer will be returned.The
size
argument must be less than or equal to 1 GiB.The
readable.read()
method should only be called onReadable
streams operating in paused mode. In flowing mode,readable.read()
is called automatically until the internal buffer is fully drained.
const readable = getReadableStreamSomehow();

// 'readable' may be triggered multiple times as data is buffered in
readable.on('readable', () => {
  let chunk;
  console.log('Stream is readable (new data received in buffer)');
  // Use a loop to make sure we read all currently available data
  while (null !== (chunk = readable.read())) {
    console.log(`Read ${chunk.length} bytes of data...`);
  }
});

// 'end' will be triggered once when there is no more data available
readable.on('end', () => {
  console.log('Reached end of stream.');
});
Each call to
readable.read()
returns a chunk of data, ornull
. The chunks are not concatenated. Awhile
loop is necessary to consume all data currently in the buffer. When reading a large file, .read() may return null, having consumed all buffered content so far, while more data is still to come, not yet buffered. In this case a new 'readable'
event will be emitted when there is more data in the buffer. Finally the'end'
event will be emitted when there is no more data to come.Therefore to read a file's whole contents from a
readable
, it is necessary to collect chunks across multiple'readable'
events:const chunks = []; readable.on('readable', () => { let chunk; while (null !== (chunk = readable.read())) { chunks.push(chunk); } }); readable.on('end', () => { const content = chunks.join(''); });
A
Readable
stream in object mode will always return a single item from a call toreadable.read(size)
, regardless of the value of thesize
argument.If the
readable.read()
method returns a chunk of data, a'data'
event will also be emitted.Calling read after the
'end'
event has been emitted will returnnull
. No runtime error will be raised.@param sizeOptional argument to specify how much data to read.
- initial?: undefined,): Promise<T>;
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeError
with theERR_INVALID_ARGS
code property. The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the
readable.map
method.@param fna reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
initial: T,): Promise<T>;This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeError
with theERR_INVALID_ARGS
code property. The reducer function iterates the stream element-by-element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the
readable.map
method.@param fna reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
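Example (a minimal sketch using Node's experimental Readable.from() helper):
import { Readable } from 'node:stream';

// Sum the chunks, starting from the initial value 0.
const total = await Readable.from([1, 2, 3, 4])
  .reduce((sum, n) => sum + n, 0);
console.log(total); // Prints: 10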
- eventName?: string | symbol): this;
Removes all listeners, or those of the specified
eventName
.It is bad practice to remove listeners added elsewhere in the code, particularly when the
EventEmitter
instance was created by some other component or module (e.g. sockets or file streams).Returns a reference to the
EventEmitter
, so that calls can be chained. - event: 'close',listener: () => void): this;
Removes the specified
listener
from the listener array for the event namedeventName
const callback = (stream) => {
  console.log('someone connected!');
};
server.on('connection', callback);
// ...
server.removeListener('connection', callback);
removeListener()
will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specifiedeventName
, thenremoveListener()
must be called multiple times to remove each instance.Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any
removeListener()
orremoveAllListeners()
calls after emitting and before the last listener finishes execution will not remove them fromemit()
in progress. Subsequent events behave as expected.
import { EventEmitter } from 'node:events';
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();

const callbackA = () => {
  console.log('A');
  myEmitter.removeListener('event', callbackB);
};

const callbackB = () => {
  console.log('B');
};

myEmitter.on('event', callbackA);
myEmitter.on('event', callbackB);

// callbackA removes listener callbackB but it will still be called.
// Internal listener array at time of emit [callbackA, callbackB]
myEmitter.emit('event');
// Prints:
//   A
//   B

// callbackB is now removed.
// Internal listener array [callbackA]
myEmitter.emit('event');
// Prints:
//   A
Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the
emitter.listeners()
method will need to be recreated.When a single function has been added as a handler multiple times for a single event (as in the example below),
removeListener()
will remove the most recently added instance. In the example theonce('ping')
listener is removed:import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping');
Returns a reference to the
EventEmitter
, so that calls can be chained. The
readable.resume()
method causes an explicitly pausedReadable
stream to resume emitting'data'
events, switching the stream into flowing mode.The
readable.resume()
method can be used to fully consume the data from a stream without actually processing any of that data:getReadableStreamSomehow() .resume() .on('end', () => { console.log('Reached the end, but did not read anything.'); });
The
readable.resume()
method has no effect if there is a'readable'
event listener.- auto_padding?: boolean): this;
When data has been encrypted without standard block padding, calling
decipher.setAutoPadding(false)
will disable automatic padding to preventdecipher.final()
from checking for and removing padding.Turning auto padding off will only work if the input data's length is a multiple of the ciphers block size.
The
decipher.setAutoPadding()
method must be called beforedecipher.final()
.@returnsfor method chaining.
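Example (a minimal sketch of a round trip with padding disabled; note the plaintext is exactly one 16-byte block):
import { Buffer } from 'node:buffer';
const { createCipheriv, createDecipheriv, randomBytes } = await import('node:crypto');

const key = randomBytes(24);
const iv = randomBytes(16);

// Encrypt a block-aligned message with standard padding disabled.
const cipher = createCipheriv('aes-192-cbc', key, iv);
cipher.setAutoPadding(false);
const encrypted = Buffer.concat([cipher.update('exactly 16 bytes'), cipher.final()]);

// Decryption must also disable padding, before final() is called.
const decipher = createDecipheriv('aes-192-cbc', key, iv);
decipher.setAutoPadding(false);
const decrypted = Buffer.concat([decipher.update(encrypted), decipher.final()]);
console.log(decrypted.toString('utf8')); // Prints: exactly 16 bytes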
- encoding: BufferEncoding): this;
The
writable.setDefaultEncoding()
method sets the defaultencoding
for aWritable
stream.@param encodingThe new default encoding
- encoding: BufferEncoding): this;
The
readable.setEncoding()
method sets the character encoding for data read from theReadable
stream.By default, no encoding is assigned and stream data will be returned as
Buffer
objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than asBuffer
objects. For instance, callingreadable.setEncoding('utf8')
will cause the output data to be interpreted as UTF-8 data, and passed as strings. Callingreadable.setEncoding('hex')
will cause the data to be encoded in hexadecimal string format.The
Readable
stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream asBuffer
objects.const readable = getReadableStreamSomehow(); readable.setEncoding('utf8'); readable.on('data', (chunk) => { assert.equal(typeof chunk, 'string'); console.log('Got %d characters of string data:', chunk.length); });
@param encodingThe encoding to use.
- n: number): this;
By default
EventEmitter
s will print a warning if more than10
listeners are added for a particular event. This is a useful default that helps finding memory leaks. Theemitter.setMaxListeners()
method allows the limit to be modified for this specificEventEmitter
instance. The value can be set toInfinity
(or0
) to indicate an unlimited number of listeners.Returns a reference to the
EventEmitter
, so that calls can be chained. - some(): Promise<boolean>;
This method is similar to
Array.prototype.some
and calls fn on each chunk in the stream until the awaited return value istrue
(or any truthy value). As soon as the awaited return value of an fn call on a chunk is truthy, the stream is destroyed and the promise is fulfilled with true
. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled withfalse
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
true
if fn returned a truthy value for at least one of the chunks. This method returns a new stream with the first limit chunks taken from the readable. - @param limit
the number of chunks to take from the readable.
@returnsa stream with limit chunks taken.
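Example (a minimal sketch of some() and take(), assuming Node's experimental Readable.from() and toArray() helpers):
import { Readable } from 'node:stream';

// some() stops reading as soon as one chunk matches.
const hasNegative = await Readable.from([3, -1, 7]).some((n) => n < 0);
console.log(hasNegative); // Prints: true

// take() produces a stream of only the first two chunks.
const firstTwo = await Readable.from([1, 2, 3, 4]).take(2).toArray();
console.log(firstTwo); // Prints: [ 1, 2 ]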
- ): Promise<any[]>;
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.
@returnsa promise containing an array with the contents of the stream.
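Example (a minimal sketch; handy in tests and scripts, since the entire stream is buffered in memory):
import { Readable } from 'node:stream';

const chunks = await Readable.from(['a', 'b', 'c']).toArray();
console.log(chunks); // Prints: [ 'a', 'b', 'c' ]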
The
writable.uncork()
method flushes all data buffered since cork was called.When using
writable.cork()
andwritable.uncork()
to manage the buffering of writes to a stream, defer calls towritable.uncork()
usingprocess.nextTick()
. Doing so allows batching of allwritable.write()
calls that occur within a given Node.js event loop phase.stream.cork(); stream.write('some '); stream.write('data '); process.nextTick(() => stream.uncork());
If the
writable.cork()
method is called multiple times on a stream, the same number of calls towritable.uncork()
must be called to flush the buffered data.
stream.cork();
stream.write('some ');
stream.cork();
stream.write('data ');
process.nextTick(() => {
  stream.uncork();
  // The data will not be flushed until uncork() is called a second time.
  stream.uncork();
});
See also:
writable.cork()
.- destination?: WritableStream): this;
The
readable.unpipe()
method detaches aWritable
stream previously attached using the pipe method.If the
destination
is not specified, then all pipes are detached.If the
destination
is specified, but no pipe is set up for it, then the method does nothing.
import fs from 'node:fs';
const readable = getReadableStreamSomehow();
const writable = fs.createWriteStream('file.txt');
// All the data from readable goes into 'file.txt',
// but only for the first second.
readable.pipe(writable);
setTimeout(() => {
  console.log('Stop writing to file.txt.');
  readable.unpipe(writable);
  console.log('Manually close the file stream.');
  writable.end();
}, 1000);
@param destinationOptional specific stream to unpipe
- chunk: any,encoding?: BufferEncoding): void;
Passing
chunk
asnull
signals the end of the stream (EOF) and behaves the same asreadable.push(null)
, after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.The
readable.unshift()
method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.The
stream.unshift(chunk)
method cannot be called after the'end'
event has been emitted or a runtime error will be thrown.Developers using
stream.unshift()
should often consider switching to use of a
stream instead. See theAPI for stream implementers
section for more information.
// Pull off a header delimited by \n\n.
// Use unshift() if we get too much.
// Call the callback with (error, header, stream).
import { StringDecoder } from 'node:string_decoder';
function parseHeader(stream, callback) {
  stream.on('error', callback);
  stream.on('readable', onReadable);
  const decoder = new StringDecoder('utf8');
  let header = '';
  function onReadable() {
    let chunk;
    while (null !== (chunk = stream.read())) {
      const str = decoder.write(chunk);
      if (str.includes('\n\n')) {
        // Found the header boundary.
        const split = str.split(/\n\n/);
        header += split.shift();
        const remaining = split.join('\n\n');
        const buf = Buffer.from(remaining, 'utf8');
        stream.removeListener('error', callback);
        // Remove the 'readable' listener before unshifting.
        stream.removeListener('readable', onReadable);
        if (buf.length)
          stream.unshift(buf);
        // Now the body of the message can be read from the stream.
        callback(null, header, stream);
        return;
      }
      // Still reading the header.
      header += str;
    }
  }
}
Unlike push,
stream.unshift(chunk)
will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results ifreadable.unshift()
is called during a read (i.e. from within a _read implementation on a custom stream). Following the call toreadable.unshift()
with an immediate push will reset the reading state appropriately, however it is best to simply avoid callingreadable.unshift()
while in the process of performing a read.@param chunkChunk of data to unshift onto the read queue. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray}, {DataView} ornull
. For object mode streams,chunk
may be any JavaScript value.@param encodingEncoding of string chunks. Must be a valid
Buffer
encoding, such as'utf8'
or'ascii'
. - data: ArrayBufferView
Updates the decipher with
data
. If theinputEncoding
argument is given, thedata
argument is a string using the specified encoding. If theinputEncoding
argument is not given,data
must be aBuffer
. Ifdata
is aBuffer
theninputEncoding
is ignored.The
outputEncoding
specifies the output format of the deciphered data. If the outputEncoding
is specified, a string using the specified encoding is returned. If nooutputEncoding
is provided, aBuffer
is returned.The
decipher.update()
method can be called multiple times with new data untildecipher.final()
is called. Callingdecipher.update()
afterdecipher.final()
will result in an error being thrown.data: string,Updates the decipher with
data
. If theinputEncoding
argument is given, thedata
argument is a string using the specified encoding. If theinputEncoding
argument is not given,data
must be aBuffer
. Ifdata
is aBuffer
theninputEncoding
is ignored.The
outputEncoding
specifies the output format of the deciphered data. If the outputEncoding
is specified, a string using the specified encoding is returned. If nooutputEncoding
is provided, aBuffer
is returned.The
decipher.update()
method can be called multiple times with new data untildecipher.final()
is called. Callingdecipher.update()
afterdecipher.final()
will result in an error being thrown.@param inputEncodingThe
encoding
of thedata
string.data: ArrayBufferView,inputEncoding: undefined,): string;Updates the decipher with
data
. If theinputEncoding
argument is given, thedata
argument is a string using the specified encoding. If theinputEncoding
argument is not given,data
must be aBuffer
. Ifdata
is aBuffer
theninputEncoding
is ignored.The
outputEncoding
specifies the output format of the deciphered data. If the outputEncoding
is specified, a string using the specified encoding is returned. If nooutputEncoding
is provided, aBuffer
is returned.The
decipher.update()
method can be called multiple times with new data untildecipher.final()
is called. Callingdecipher.update()
afterdecipher.final()
will result in an error being thrown.@param inputEncodingThe
encoding
of thedata
string.@param outputEncodingThe
encoding
of the return value.data: string,): string;Updates the decipher with
data
. If theinputEncoding
argument is given, thedata
argument is a string using the specified encoding. If theinputEncoding
argument is not given,data
must be aBuffer
. Ifdata
is aBuffer
theninputEncoding
is ignored.The
outputEncoding
specifies the output format of the deciphered data. If the outputEncoding
is specified, a string using the specified encoding is returned. If nooutputEncoding
is provided, aBuffer
is returned.The
decipher.update()
method can be called multiple times with new data untildecipher.final()
is called. Callingdecipher.update()
afterdecipher.final()
will result in an error being thrown.@param inputEncodingThe
encoding
of thedata
string.@param outputEncodingThe
encoding
of the return value. - wrap(stream: ReadableStream): this;
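Example (a minimal sketch showing update() called with successive slices of the ciphertext; the key and iv are randomly generated here purely for illustration):
import { Buffer } from 'node:buffer';
const { createCipheriv, createDecipheriv, randomBytes } = await import('node:crypto');

const key = randomBytes(24);
const iv = randomBytes(16);

const cipher = createCipheriv('aes-192-cbc', key, iv);
const encrypted = Buffer.concat([
  cipher.update('some clear text data', 'utf8'),
  cipher.final(),
]);

// update() may be called repeatedly with new data; final() returns
// whatever remains after the last full block is processed.
const decipher = createDecipheriv('aes-192-cbc', key, iv);
let decrypted = decipher.update(encrypted.subarray(0, 8), undefined, 'utf8');
decrypted += decipher.update(encrypted.subarray(8), undefined, 'utf8');
decrypted += decipher.final('utf8');
console.log(decrypted); // Prints: some clear text data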
Prior to Node.js 0.10, streams did not implement the entire
node:stream
module API as it is currently defined. (SeeCompatibility
for more information.)When using an older Node.js library that emits
'data'
events and has a pause method that is advisory only, thereadable.wrap()
method can be used to create aReadable
stream that uses the old stream as its data source.It will rarely be necessary to use
readable.wrap()
but the method has been provided as a convenience for interacting with older Node.js applications and libraries.
import { OldReader } from './old-api-module.js';
import { Readable } from 'node:stream';
const oreader = new OldReader();
const myReader = new Readable().wrap(oreader);

myReader.on('readable', () => {
  myReader.read(); // etc.
});
@param streamAn "old style" readable stream
- chunk: any,): boolean;
The
writable.write()
method writes some data to the stream, and calls the suppliedcallback
once the data has been fully handled. If an error occurs, thecallback
will be called with the error as its first argument. Thecallback
is called asynchronously and before'error'
is emitted.The return value is
true
if the internal buffer is less than thehighWaterMark
configured when the stream was created after admittingchunk
. Iffalse
is returned, further attempts to write data to the stream should stop until the'drain'
event is emitted.While a stream is not draining, calls to
write()
will bufferchunk
, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'
event will be emitted. Oncewrite()
returns false, do not write more chunks until the'drain'
event is emitted. While callingwrite()
on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform
, because theTransform
streams are paused by default until they are piped or a'data'
or'readable'
event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readable
and use pipe. However, if callingwrite()
is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'
event:
function write(data, cb) {
  if (!stream.write(data)) {
    stream.once('drain', cb);
  } else {
    process.nextTick(cb);
  }
}

// Wait for cb to be called before doing any other write.
write('hello', () => {
  console.log('Write completed, do more writes now.');
});
A
Writable
stream in object mode will always ignore theencoding
argument.@param chunkOptional data to write. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunk
may be any JavaScript value other thannull
.@param callbackCallback for when this chunk of data is flushed.
@returnsfalse
if the stream wishes for the calling code to wait for the'drain'
event to be emitted before continuing to write additional data; otherwisetrue
.chunk: any,encoding: BufferEncoding,): boolean;The
writable.write()
method writes some data to the stream, and calls the suppliedcallback
once the data has been fully handled. If an error occurs, thecallback
will be called with the error as its first argument. Thecallback
is called asynchronously and before'error'
is emitted.The return value is
true
if the internal buffer is less than thehighWaterMark
configured when the stream was created after admittingchunk
. Iffalse
is returned, further attempts to write data to the stream should stop until the'drain'
event is emitted.While a stream is not draining, calls to
write()
will bufferchunk
, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'
event will be emitted. Oncewrite()
returns false, do not write more chunks until the'drain'
event is emitted. While callingwrite()
on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform
, because theTransform
streams are paused by default until they are piped or a'data'
or'readable'
event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readable
and use pipe. However, if callingwrite()
is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'
event:
function write(data, cb) {
  if (!stream.write(data)) {
    stream.once('drain', cb);
  } else {
    process.nextTick(cb);
  }
}

// Wait for cb to be called before doing any other write.
write('hello', () => {
  console.log('Write completed, do more writes now.');
});
A
Writable
stream in object mode will always ignore theencoding
argument.@param chunkOptional data to write. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunk
may be any JavaScript value other thannull
.@param encodingThe encoding, if
chunk
is a string.@param callbackCallback for when this chunk of data is flushed.
@returnsfalse
if the stream wishes for the calling code to wait for the'drain'
event to be emitted before continuing to write additional data; otherwisetrue
.
interface DecipherOCB
Instances of the
Decipher
class are used to decrypt data. The class can be used in one of two ways:- As a
stream
that is both readable and writable, where plain encrypted data is written to produce unencrypted data on the readable side, or - Using the
decipher.update()
anddecipher.final()
methods to produce the unencrypted data.
The createDecipheriv method is used to create
Decipher
instances.Decipher
objects are not to be created directly using thenew
keyword.Example: Using
Decipher
objects as streams:
import { Buffer } from 'node:buffer';
const {
  scryptSync,
  createDecipheriv,
} = await import('node:crypto');

const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';
// Key length is dependent on the algorithm. In this case for aes192, it is
// 24 bytes (192 bits).
// Use the async `crypto.scrypt()` instead.
const key = scryptSync(password, 'salt', 24);
// The IV is usually passed along with the ciphertext.
const iv = Buffer.alloc(16, 0); // Initialization vector.

const decipher = createDecipheriv(algorithm, key, iv);

let decrypted = '';
decipher.on('readable', () => {
  let chunk;
  while (null !== (chunk = decipher.read())) {
    decrypted += chunk.toString('utf8');
  }
});
decipher.on('end', () => {
  console.log(decrypted);
  // Prints: some clear text data
});

// Encrypted with same algorithm, key and iv.
const encrypted =
  'e5f79c5915c02171eec6b212d5520d44480993d7d622a7c4c2da32f6efda0ffa';
decipher.write(encrypted, 'hex');
decipher.end();
Example: Using
Decipher
and piped streams:
import {
  createReadStream,
  createWriteStream,
} from 'node:fs';
import { Buffer } from 'node:buffer';
const {
  scryptSync,
  createDecipheriv,
} = await import('node:crypto');

const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';
// Use the async `crypto.scrypt()` instead.
const key = scryptSync(password, 'salt', 24);
// The IV is usually passed along with the ciphertext.
const iv = Buffer.alloc(16, 0); // Initialization vector.

const decipher = createDecipheriv(algorithm, key, iv);

const input = createReadStream('test.enc');
const output = createWriteStream('test.js');

input.pipe(decipher).pipe(output);
Example: Using the
decipher.update()
anddecipher.final()
methods:
import { Buffer } from 'node:buffer';
const {
  scryptSync,
  createDecipheriv,
} = await import('node:crypto');

const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';
// Use the async `crypto.scrypt()` instead.
const key = scryptSync(password, 'salt', 24);
// The IV is usually passed along with the ciphertext.
const iv = Buffer.alloc(16, 0); // Initialization vector.

const decipher = createDecipheriv(algorithm, key, iv);

// Encrypted using same algorithm, key and iv.
const encrypted =
  'e5f79c5915c02171eec6b212d5520d44480993d7d622a7c4c2da32f6efda0ffa';
let decrypted = decipher.update(encrypted, 'hex', 'utf8');
decrypted += decipher.final('utf8');
console.log(decrypted);
// Prints: some clear text data
- allowHalfOpen: boolean
If
false
then the stream will automatically end the writable side when the readable side ends. Set initially by theallowHalfOpen
constructor option, which defaults totrue
.This can be changed manually to change the half-open behavior of an existing
Duplex
stream instance, but must be changed before the'end'
event is emitted. - readable: boolean
Is
true
if it is safe to call read, which means the stream has not been destroyed or emitted'error'
or'end'
. - readonly readableAborted: boolean
Returns whether the stream was destroyed or errored before emitting
'end'
. - readonly readableEncoding: null | BufferEncoding
Getter for the property
encoding
of a givenReadable
stream. Theencoding
property can be set using the setEncoding method. - readonly readableFlowing: null | boolean
This property reflects the current state of a
Readable
stream as described in the Three states section. - readonly readableHighWaterMark: number
Returns the value of
highWaterMark
passed when creating thisReadable
. - readonly readableLength: number
This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the
highWaterMark
. - readonly writable: boolean
Is
true
if it is safe to callwritable.write()
, which means the stream has not been destroyed, errored, or ended. - readonly writableCorked: number
Number of times
writable.uncork()
needs to be called in order to fully uncork the stream. - readonly writableEnded: boolean
Is
true
afterwritable.end()
has been called. This property does not indicate whether the data has been flushed; for this, use writable.writableFinished
instead. - readonly writableHighWaterMark: number
Return the value of
highWaterMark
passed when creating thisWritable
. - readonly writableLength: number
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the
highWaterMark
. - readonly writableNeedDrain: boolean
Is
true
if the stream's buffer has been full and the stream will emit 'drain'
. Calls
readable.destroy()
with anAbortError
and returns a promise that fulfills when the stream is finished.- event: 'close',listener: () => void): this;
Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'data',listener: (chunk: any) => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'drain',listener: () => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'end',listener: () => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'error',): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'finish',listener: () => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'pause',listener: () => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'pipe',): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'readable',listener: () => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'resume',listener: () => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: 'unpipe',): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
event: string | symbol,listener: (...args: any[]) => void): this;Event emitter The defined events on documents including:
- close
- data
- drain
- end
- error
- finish
- pause
- pipe
- readable
- resume
- unpipe
This method returns a new stream with chunks of the underlying stream paired with a counter in the form
[index, chunk]
. The first index value is0
and it increases by 1 for each chunk produced.@returnsa stream of indexed pairs.
- stream: ComposeFnParam | T | Iterable<T, any, any> | AsyncIterable<T, any, any>,): T;
The
writable.cork()
method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.The primary intent of
writable.cork()
is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination,writable.cork()
buffers all the chunks untilwritable.uncork()
is called, which will pass them all towritable._writev()
, if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use ofwritable.cork()
without implementingwritable._writev()
may have an adverse effect on throughput.See also:
writable.uncork()
,writable._writev()
.- ): this;
Destroy the stream. Optionally emit an
'error'
event, and emit a'close'
event (unlessemitClose
is set tofalse
). After this call, the readable stream will release any internal resources and subsequent calls topush()
will be ignored.Once
destroy()
has been called any further calls will be a no-op and no further errors except from_destroy()
may be emitted as'error'
.Implementors should not override this method, but instead implement
readable._destroy()
.@param errorError which will be passed as payload in
'error'
event - drop(limit: number,
This method returns a new stream with the first limit chunks dropped from the start.
@param limitthe number of chunks to drop from the readable.
@returnsa stream with limit chunks dropped from the start.
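For example, a small sketch assuming Readable.from() and toArray():

import { Readable } from 'node:stream';

// Drop the first two chunks; only the remainder reaches toArray().
const rest = await Readable.from([1, 2, 3, 4]).drop(2).toArray();
console.log(rest); // [ 3, 4 ]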
- emit(event: 'close'): boolean;
Synchronously calls each of the listeners registered for the event named
eventName
, in the order they were registered, passing the supplied arguments to each.Returns
true
if the event had listeners,false
otherwise.import { EventEmitter } from 'node:events'; const myEmitter = new EventEmitter(); // First listener myEmitter.on('event', function firstListener() { console.log('Helloooo! first listener'); }); // Second listener myEmitter.on('event', function secondListener(arg1, arg2) { console.log(`event with parameters ${arg1}, ${arg2} in second listener`); }); // Third listener myEmitter.on('event', function thirdListener(...args) { const parameters = args.join(', '); console.log(`event with parameters ${parameters} in third listener`); }); console.log(myEmitter.listeners('event')); myEmitter.emit('event', 1, 2, 3, 4, 5); // Prints: // [ // [Function: firstListener], // [Function: secondListener], // [Function: thirdListener] // ] // Helloooo! first listener // event with parameters 1, 2 in second listener // event with parameters 1, 2, 3, 4, 5 in third listener
- end(cb?: () => void): this;
Calling the
writable.end()
method signals that no more data will be written to theWritable
. The optionalchunk
andencoding
arguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!
end(chunk: any,cb?: () => void): this;Calling the
writable.end()
method signals that no more data will be written to theWritable
. The optionalchunk
andencoding
arguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!
@param chunkOptional data to write. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunk
may be any JavaScript value other thannull
.end(chunk: any,encoding: BufferEncoding,cb?: () => void): this;Calling the
writable.end()
method signals that no more data will be written to theWritable
. The optionalchunk
andencoding
arguments allow one final additional chunk of data to be written immediately before closing the stream.Calling the write method after calling end will raise an error.
// Write 'hello, ' and then end with 'world!'. import fs from 'node:fs'; const file = fs.createWriteStream('example.txt'); file.write('hello, '); file.end('world!'); // Writing more now is not allowed!
@param chunkOptional data to write. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunk
may be any JavaScript value other thannull
.@param encodingThe encoding if
chunk
is a string. Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or
Symbol
s.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => {}); myEE.on('bar', () => {}); const sym = Symbol('symbol'); myEE.on(sym, () => {}); console.log(myEE.eventNames()); // Prints: [ 'foo', 'bar', Symbol(symbol) ]
- ): Promise<boolean>;
This method is similar to
Array.prototype.every
and calls fn on each chunk in the stream to check if all awaited return values are truthy for fn. Once the awaited return value of an fn call on a chunk is falsy, the stream is destroyed and the promise is fulfilled withfalse
. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled withtrue
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
true
if fn returned a truthy value for every one of the chunks. This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise - that promise will be
await
ed.@param fna function to filter chunks from the stream. Async or not.
@returnsa stream filtered with the predicate fn.
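A sketch showing every() and filter() side by side, assuming Readable.from():

import { Readable } from 'node:stream';

// every() resolves to a boolean and short-circuits on the first falsy result.
const allPositive = await Readable.from([1, 2, 3]).every((n) => n > 0);
console.log(allPositive); // true

// filter() yields a new stream containing only the matching chunks.
const evens = await Readable.from([1, 2, 3, 4])
  .filter((n) => n % 2 === 0)
  .toArray();
console.log(evens); // [ 2, 4 ]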
Once the
decipher.final()
method has been called, theDecipher
object can no longer be used to decrypt data. Attempts to calldecipher.final()
more than once will result in an error being thrown.@returnsAny remaining deciphered contents. If
outputEncoding
is specified, a string is returned. If anoutputEncoding
is not provided, a Buffer is returned.outputEncoding: BufferEncoding): string;Once the
decipher.final()
method has been called, theDecipher
object can no longer be used to decrypt data. Attempts to calldecipher.final()
more than once will result in an error being thrown.@param outputEncodingThe
encoding
of the return value.@returnsAny remaining deciphered contents. If
outputEncoding
is specified, a string is returned. If anoutputEncoding
is not provided, a Buffer is returned.- ): Promise<undefined | T>;
This method is similar to
Array.prototype.find
and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled withundefined
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefined
if no element was found.find(): Promise<any>;This method is similar to
Array.prototype.find
and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled withundefined
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to the first chunk for which fn evaluated with a truthy value, or
undefined
if no element was found. This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.
It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.
@param fna function to map over every chunk in the stream. May be async. May be a stream or generator.
@returnsa stream flat-mapped with the function fn.
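A sketch of find() and flatMap(), assuming Readable.from():

import { Readable } from 'node:stream';

// find() resolves with the first chunk the predicate accepts, or undefined.
const firstLarge = await Readable.from([1, 2, 3, 4]).find((n) => n > 2);
console.log(firstLarge); // 3

// flatMap() lets a single input chunk expand into several output chunks.
const words = await Readable.from(['hello world', 'foo bar'])
  .flatMap((line) => line.split(' '))
  .toArray();
console.log(words); // [ 'hello', 'world', 'foo', 'bar' ]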
- ): Promise<void>;
This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise - that promise will be
await
ed.This method is different from
for await...of
loops in that it can optionally process chunks concurrently. In addition, aforEach
iteration can only be stopped by having passed asignal
option and aborting the related AbortController whilefor await...of
can be stopped withbreak
orreturn
. In either case the stream will be destroyed.This method is different from listening to the
'data'
event in that it uses thereadable
event in the underlying machinery and can limit the number of concurrent fn calls.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise for when the stream has finished.
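A short sketch; the concurrency option is the knob for limiting concurrent fn calls mentioned above:

import { Readable } from 'node:stream';

// At most two fn calls run at a time; the promise settles when the stream ends.
await Readable.from([1, 2, 3, 4]).forEach(async (n) => {
  await new Promise((resolve) => setTimeout(resolve, 10));
  console.log(n);
}, { concurrency: 2 });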
Returns the current max listener value for the
EventEmitter
which is either set byemitter.setMaxListeners(n)
or defaults to EventEmitter.defaultMaxListeners.The
readable.isPaused()
method returns the current operating state of theReadable
. This is used primarily by the mechanism that underlies thereadable.pipe()
method. In most typical cases, there will be no reason to use this method directly.const readable = new stream.Readable(); readable.isPaused(); // === false readable.pause(); readable.isPaused(); // === true readable.resume(); readable.isPaused(); // === false
- options?: { destroyOnReturn: boolean }): AsyncIterator<any>;
The iterator created by this method gives users the option to cancel the destruction of the stream if the
for await...of
loop is exited byreturn
,break
, orthrow
, or if the iterator should destroy the stream if the stream emitted an error during iteration. - eventName: string | symbol,listener?: Function): number;
Returns the number of listeners listening for the event named
eventName
. Iflistener
is provided, it will return how many times the listener is found in the list of the listeners of the event.@param eventNameThe name of the event being listened for
@param listenerThe event handler function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName
.server.on('connection', (stream) => { console.log('someone connected!'); }); console.log(util.inspect(server.listeners('connection'))); // Prints: [ [Function] ]
- map(
This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise - that promise will be
await
ed before being passed to the result stream.@param fna function to map over every chunk in the stream. Async or not.
@returnsa stream mapped with the function fn.
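For instance, a sketch assuming Readable.from() and toArray():

import { Readable } from 'node:stream';

// Async mappers are awaited before their results enter the output stream.
const doubled = await Readable.from([1, 2, 3])
  .map(async (n) => n * 2)
  .toArray();
console.log(doubled); // [ 2, 4, 6 ]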
- eventName: string | symbol,listener: (...args: any[]) => void): this;
Alias for
emitter.removeListener()
. - on(event: 'close',listener: () => void): this;
Adds the
listener
function to the end of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.on('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.on('foo', () => console.log('a')); myEE.prependListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
- once(event: 'close',listener: () => void): this;
Adds a one-time
listener
function for the event namedeventName
. The next timeeventName
is triggered, this listener is removed and then invoked.server.once('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.By default, event listeners are invoked in the order they are added. The
emitter.prependOnceListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.import { EventEmitter } from 'node:events'; const myEE = new EventEmitter(); myEE.once('foo', () => console.log('a')); myEE.prependOnceListener('foo', () => console.log('b')); myEE.emit('foo'); // Prints: // b // a
@param listenerThe callback function
The
readable.pause()
method will cause a stream in flowing mode to stop emitting'data'
events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.const readable = getReadableStreamSomehow(); readable.on('data', (chunk) => { console.log(`Received ${chunk.length} bytes of data.`); readable.pause(); console.log('There will be no additional data for 1 second.'); setTimeout(() => { console.log('Now data will start flowing again.'); readable.resume(); }, 1000); });
The
readable.pause()
method has no effect if there is a'readable'
event listener.- event: 'close',listener: () => void): this;
Adds the
listener
function to the beginning of the listeners array for the event namedeventName
. No checks are made to see if thelistener
has already been added. Multiple calls passing the same combination ofeventName
andlistener
will result in thelistener
being added, and called, multiple times.server.prependListener('connection', (stream) => { console.log('someone connected!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
- event: 'close',listener: () => void): this;
Adds a one-time
listener
function for the event namedeventName
to the beginning of the listeners array. The next timeeventName
is triggered, this listener is removed, and then invoked.server.prependOnceListener('connection', (stream) => { console.log('Ah, we have our first user!'); });
Returns a reference to the
EventEmitter
, so that calls can be chained.@param listenerThe callback function
- eventName: string | symbol): Function[];
Returns a copy of the array of listeners for the event named
eventName
, including any wrappers (such as those created by.once()
).import { EventEmitter } from 'node:events'; const emitter = new EventEmitter(); emitter.once('log', () => console.log('log once')); // Returns a new Array with a function `onceWrapper` which has a property // `listener` which contains the original listener bound above const listeners = emitter.rawListeners('log'); const logFnWrapper = listeners[0]; // Logs "log once" to the console and does not unbind the `once` event logFnWrapper.listener(); // Logs "log once" to the console and removes the listener logFnWrapper(); emitter.on('log', () => console.log('log persistently')); // Will return a new Array with a single function bound by `.on()` above const newListeners = emitter.rawListeners('log'); // Logs "log persistently" twice newListeners[0](); emitter.emit('log');
- read(size?: number): any;
The
readable.read()
method reads data out of the internal buffer and returns it. If no data is available to be read,null
is returned. By default, the data is returned as aBuffer
object unless an encoding has been specified using thereadable.setEncoding()
method or the stream is operating in object mode.The optional
size
argument specifies a specific number of bytes to read. Ifsize
bytes are not available to be read,null
will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.If the
size
argument is not specified, all of the data contained in the internal buffer will be returned.The
size
argument must be less than or equal to 1 GiB.The
readable.read()
method should only be called onReadable
streams operating in paused mode. In flowing mode,readable.read()
is called automatically until the internal buffer is fully drained.const readable = getReadableStreamSomehow(); // 'readable' may be triggered multiple times as data is buffered in readable.on('readable', () => { let chunk; console.log('Stream is readable (new data received in buffer)'); // Use a loop to make sure we read all currently available data while (null !== (chunk = readable.read())) { console.log(`Read ${chunk.length} bytes of data...`); } }); // 'end' will be triggered once when there is no more data available readable.on('end', () => { console.log('Reached end of stream.'); });
Each call to
readable.read()
returns a chunk of data, ornull
. The chunks are not concatenated. Awhile
loop is necessary to consume all data currently in the buffer. When reading a large file, .read()
may returnnull
, having consumed all buffered content so far, but there is still more data to come not yet buffered. In this case a new'readable'
event will be emitted when there is more data in the buffer. Finally the'end'
event will be emitted when there is no more data to come.Therefore to read a file's whole contents from a
readable
, it is necessary to collect chunks across multiple'readable'
events:const chunks = []; readable.on('readable', () => { let chunk; while (null !== (chunk = readable.read())) { chunks.push(chunk); } }); readable.on('end', () => { const content = chunks.join(''); });
A
Readable
stream in object mode will always return a single item from a call toreadable.read(size)
, regardless of the value of thesize
argument.If the
readable.read()
method returns a chunk of data, a'data'
event will also be emitted.Calling read after the
'end'
event has been emitted will returnnull
. No runtime error will be raised.@param sizeOptional argument to specify how much data to read.
- initial?: undefined,): Promise<T>;
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeError
with theERR_INVALID_ARGS
code property.The reducer function iterates the stream element-by-element which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the
readable.map
method.@param fna reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
initial: T,): Promise<T>;This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a
TypeError
with theERR_INVALID_ARGS
code property.The reducer function iterates the stream element-by-element which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the
readable.map
method.@param fna reducer function to call over every chunk in the stream. Async or not.
@param initialthe initial value to use in the reduction.
@returnsa promise for the final value of the reduction.
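A minimal sketch, assuming Readable.from():

import { Readable } from 'node:stream';

// With an explicit initial value the reduction also works on an empty stream.
const total = await Readable.from([1, 2, 3, 4]).reduce((acc, n) => acc + n, 0);
console.log(total); // 10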
- eventName?: string | symbol): this;
Removes all listeners, or those of the specified
eventName
.It is bad practice to remove listeners added elsewhere in the code, particularly when the
EventEmitter
instance was created by some other component or module (e.g. sockets or file streams).Returns a reference to the
EventEmitter
, so that calls can be chained. - event: 'close',listener: () => void): this;
Removes the specified
listener
from the listener array for the event namedeventName
.const callback = (stream) => { console.log('someone connected!'); }; server.on('connection', callback); // ... server.removeListener('connection', callback);
removeListener()
will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specifiedeventName
, thenremoveListener()
must be called multiple times to remove each instance.Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any
removeListener()
orremoveAllListeners()
calls after emitting and before the last listener finishes execution will not remove them fromemit()
in progress. Subsequent events behave as expected.import { EventEmitter } from 'node:events'; class MyEmitter extends EventEmitter {} const myEmitter = new MyEmitter(); const callbackA = () => { console.log('A'); myEmitter.removeListener('event', callbackB); }; const callbackB = () => { console.log('B'); }; myEmitter.on('event', callbackA); myEmitter.on('event', callbackB); // callbackA removes listener callbackB but it will still be called. // Internal listener array at time of emit [callbackA, callbackB] myEmitter.emit('event'); // Prints: // A // B // callbackB is now removed. // Internal listener array [callbackA] myEmitter.emit('event'); // Prints: // A
Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the
emitter.listeners()
method will need to be recreated.When a single function has been added as a handler multiple times for a single event (as in the example below),
removeListener()
will remove the most recently added instance. In the example theonce('ping')
listener is removed:import { EventEmitter } from 'node:events'; const ee = new EventEmitter(); function pong() { console.log('pong'); } ee.on('ping', pong); ee.once('ping', pong); ee.removeListener('ping', pong); ee.emit('ping'); ee.emit('ping');
Returns a reference to the
EventEmitter
, so that calls can be chained. The
readable.resume()
method causes an explicitly pausedReadable
stream to resume emitting'data'
events, switching the stream into flowing mode.The
readable.resume()
method can be used to fully consume the data from a stream without actually processing any of that data:getReadableStreamSomehow() .resume() .on('end', () => { console.log('Reached the end, but did not read anything.'); });
The
readable.resume()
method has no effect if there is a'readable'
event listener.- auto_padding?: boolean): this;
When data has been encrypted without standard block padding, calling
decipher.setAutoPadding(false)
will disable automatic padding to preventdecipher.final()
from checking for and removing padding.Turning auto padding off will only work if the input data's length is a multiple of the ciphers block size.
The
decipher.setAutoPadding()
method must be called beforedecipher.final()
.@returnsfor method chaining.
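Putting decipher.update(), decipher.final(), and setAutoPadding() in context, a minimal round-trip sketch (it assumes the decrypting side knows the same key and IV used to encrypt):

import { createCipheriv, createDecipheriv, randomBytes, scryptSync } from 'node:crypto';

const algorithm = 'aes-192-cbc';
const key = scryptSync('Password used to generate key', 'salt', 24);
const iv = randomBytes(16);

// Encrypt first so there is something to decrypt.
const cipher = createCipheriv(algorithm, key, iv);
const encrypted = cipher.update('some clear text data', 'utf8', 'hex') +
  cipher.final('hex');

// decipher.update() may be called repeatedly until decipher.final().
// Default PKCS#7 padding is removed by final(); setAutoPadding(false)
// would leave that to the caller.
const decipher = createDecipheriv(algorithm, key, iv);
let decrypted = decipher.update(encrypted, 'hex', 'utf8');
decrypted += decipher.final('utf8');
console.log(decrypted); // Prints: some clear text data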
- encoding: BufferEncoding): this;
The
writable.setDefaultEncoding()
method sets the defaultencoding
for aWritable
stream.@param encodingThe new default encoding
- encoding: BufferEncoding): this;
The
readable.setEncoding()
method sets the character encoding for data read from theReadable
stream.By default, no encoding is assigned and stream data will be returned as
Buffer
objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than asBuffer
objects. For instance, callingreadable.setEncoding('utf8')
will cause the output data to be interpreted as UTF-8 data, and passed as strings. Callingreadable.setEncoding('hex')
will cause the data to be encoded in hexadecimal string format.The
Readable
stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream asBuffer
objects.const readable = getReadableStreamSomehow(); readable.setEncoding('utf8'); readable.on('data', (chunk) => { assert.equal(typeof chunk, 'string'); console.log('Got %d characters of string data:', chunk.length); });
@param encodingThe encoding to use.
- n: number): this;
By default
EventEmitter
s will print a warning if more than10
listeners are added for a particular event. This is a useful default that helps finding memory leaks. Theemitter.setMaxListeners()
method allows the limit to be modified for this specificEventEmitter
instance. The value can be set toInfinity
(or0
) to indicate an unlimited number of listeners.Returns a reference to the
EventEmitter
, so that calls can be chained. - some(): Promise<boolean>;
This method is similar to
Array.prototype.some
and calls fn on each chunk in the stream until the awaited return value istrue
(or any truthy value). Once the awaited return value of an fn call on a chunk is truthy, the stream is destroyed and the promise is fulfilled withtrue
. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled withfalse
.@param fna function to call on each chunk of the stream. Async or not.
@returnsa promise evaluating to
true
if fn returned a truthy value for at least one of the chunks. - @param limit
the number of chunks to take from the readable.
@returnsa stream with limit chunks taken.
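A sketch of some() and take() together, assuming Readable.from():

import { Readable } from 'node:stream';

// some() destroys the stream as soon as one chunk matches.
const hasNegative = await Readable.from([1, -2, 3]).some((n) => n < 0);
console.log(hasNegative); // true

// take() keeps only the first `limit` chunks.
const firstTwo = await Readable.from([1, 2, 3, 4]).take(2).toArray();
console.log(firstTwo); // [ 1, 2 ]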
- ): Promise<any[]>;
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.
@returnsa promise containing an array with the contents of the stream.
The
writable.uncork()
method flushes all data buffered since cork was called.When using
writable.cork()
andwritable.uncork()
to manage the buffering of writes to a stream, defer calls towritable.uncork()
usingprocess.nextTick()
. Doing so allows batching of allwritable.write()
calls that occur within a given Node.js event loop phase.stream.cork(); stream.write('some '); stream.write('data '); process.nextTick(() => stream.uncork());
If the
writable.cork()
method is called multiple times on a stream, the same number of calls towritable.uncork()
must be made to flush the buffered data.stream.cork(); stream.write('some '); stream.cork(); stream.write('data '); process.nextTick(() => { stream.uncork(); // The data will not be flushed until uncork() is called a second time. stream.uncork(); });
See also:
writable.cork()
.- destination?: WritableStream): this;
The
readable.unpipe()
method detaches aWritable
stream previously attached using the pipe method.If the
destination
is not specified, then all pipes are detached.If the
destination
is specified, but no pipe is set up for it, then the method does nothing.import fs from 'node:fs'; const readable = getReadableStreamSomehow(); const writable = fs.createWriteStream('file.txt'); // All the data from readable goes into 'file.txt', // but only for the first second. readable.pipe(writable); setTimeout(() => { console.log('Stop writing to file.txt.'); readable.unpipe(writable); console.log('Manually close the file stream.'); writable.end(); }, 1000);
@param destinationOptional specific stream to unpipe
- chunk: any,encoding?: BufferEncoding): void;
Passing
chunk
asnull
signals the end of the stream (EOF) and behaves the same asreadable.push(null)
, after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.The
readable.unshift()
method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.The
stream.unshift(chunk)
method cannot be called after the'end'
event has been emitted or a runtime error will be thrown.Developers using
stream.unshift()
often should consider switching to use of aTransform
stream instead. See theAPI for stream implementers
section for more information.// Pull off a header delimited by \n\n. // Use unshift() if we get too much. // Call the callback with (error, header, stream). import { StringDecoder } from 'node:string_decoder'; function parseHeader(stream, callback) { stream.on('error', callback); stream.on('readable', onReadable); const decoder = new StringDecoder('utf8'); let header = ''; function onReadable() { let chunk; while (null !== (chunk = stream.read())) { const str = decoder.write(chunk); if (str.includes('\n\n')) { // Found the header boundary. const split = str.split(/\n\n/); header += split.shift(); const remaining = split.join('\n\n'); const buf = Buffer.from(remaining, 'utf8'); stream.removeListener('error', callback); // Remove the 'readable' listener before unshifting. stream.removeListener('readable', onReadable); if (buf.length) stream.unshift(buf); // Now the body of the message can be read from the stream. callback(null, header, stream); return; } // Still reading the header. header += str; } } }
Unlike push,
stream.unshift(chunk)
will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results ifreadable.unshift()
is called during a read (i.e. from within a _read implementation on a custom stream). Following the call toreadable.unshift()
with an immediate push will reset the reading state appropriately; however, it is best to simply avoid callingreadable.unshift()
while in the process of performing a read.@param chunkChunk of data to unshift onto the read queue. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray}, {DataView} ornull
. For object mode streams,chunk
may be any JavaScript value.@param encodingEncoding of string chunks. Must be a valid
Buffer
encoding, such as'utf8'
or'ascii'
. - data: ArrayBufferView
Updates the decipher with
data
. If theinputEncoding
argument is given, thedata
argument is a string using the specified encoding. If theinputEncoding
argument is not given,data
must be aBuffer
. Ifdata
is aBuffer
theninputEncoding
is ignored.The
outputEncoding
specifies the output format of the enciphered data. If theoutputEncoding
is specified, a string using the specified encoding is returned. If nooutputEncoding
is provided, aBuffer
is returned.The
decipher.update()
method can be called multiple times with new data untildecipher.final()
is called. Callingdecipher.update()
afterdecipher.final()
will result in an error being thrown.data: string,Updates the decipher with
data
. If theinputEncoding
argument is given, thedata
argument is a string using the specified encoding. If theinputEncoding
argument is not given,data
must be aBuffer
. Ifdata
is aBuffer
theninputEncoding
is ignored.The
outputEncoding
specifies the output format of the enciphered data. If theoutputEncoding
is specified, a string using the specified encoding is returned. If nooutputEncoding
is provided, aBuffer
is returned.The
decipher.update()
method can be called multiple times with new data untildecipher.final()
is called. Callingdecipher.update()
afterdecipher.final()
will result in an error being thrown.@param inputEncodingThe
encoding
of thedata
string.data: ArrayBufferView,inputEncoding: undefined,): string;Updates the decipher with
data
. If theinputEncoding
argument is given, thedata
argument is a string using the specified encoding. If theinputEncoding
argument is not given,data
must be aBuffer
. Ifdata
is aBuffer
theninputEncoding
is ignored.The
outputEncoding
specifies the output format of the enciphered data. If theoutputEncoding
is specified, a string using the specified encoding is returned. If nooutputEncoding
is provided, aBuffer
is returned.The
decipher.update()
method can be called multiple times with new data untildecipher.final()
is called. Callingdecipher.update()
afterdecipher.final()
will result in an error being thrown.@param inputEncodingThe
encoding
of thedata
string.@param outputEncodingThe
encoding
of the return value.data: string,): string;Updates the decipher with
data
. If theinputEncoding
argument is given, thedata
argument is a string using the specified encoding. If theinputEncoding
argument is not given,data
must be aBuffer
. Ifdata
is aBuffer
theninputEncoding
is ignored.The
outputEncoding
specifies the output format of the enciphered data. If theoutputEncoding
is specified, a string using the specified encoding is returned. If nooutputEncoding
is provided, aBuffer
is returned.The
decipher.update()
method can be called multiple times with new data untildecipher.final()
is called. Callingdecipher.update()
afterdecipher.final()
will result in an error being thrown.@param inputEncodingThe
encoding
of thedata
string.@param outputEncodingThe
encoding
of the return value. - wrap(stream: ReadableStream): this;
Prior to Node.js 0.10, streams did not implement the entire
node:stream
module API as it is currently defined. (SeeCompatibility
for more information.)When using an older Node.js library that emits
'data'
events and has a pause method that is advisory only, thereadable.wrap()
method can be used to create aReadable
stream that uses the old stream as its data source.It will rarely be necessary to use
readable.wrap()
but the method has been provided as a convenience for interacting with older Node.js applications and libraries.import { OldReader } from './old-api-module.js'; import { Readable } from 'node:stream'; const oreader = new OldReader(); const myReader = new Readable().wrap(oreader); myReader.on('readable', () => { myReader.read(); // etc. });
@param streamAn "old style" readable stream
- chunk: any,): boolean;
The
writable.write()
method writes some data to the stream, and calls the suppliedcallback
once the data has been fully handled. If an error occurs, thecallback
will be called with the error as its first argument. Thecallback
is called asynchronously and before'error'
is emitted.The return value is
true
if the internal buffer is less than thehighWaterMark
configured when the stream was created after admittingchunk
. Iffalse
is returned, further attempts to write data to the stream should stop until the'drain'
event is emitted.While a stream is not draining, calls to
write()
will bufferchunk
, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'
event will be emitted. Oncewrite()
returns false, do not write more chunks until the'drain'
event is emitted. While callingwrite()
on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform
, because theTransform
streams are paused by default until they are piped or a'data'
or'readable'
event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readable
and use pipe. However, if callingwrite()
is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'
event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });
A
Writable
stream in object mode will always ignore theencoding
argument.@param chunkOptional data to write. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunk
may be any JavaScript value other thannull
.@param callbackCallback for when this chunk of data is flushed.
@returnsfalse
if the stream wishes for the calling code to wait for the'drain'
event to be emitted before continuing to write additional data; otherwisetrue
.chunk: any,encoding: BufferEncoding,): boolean;The
writable.write()
method writes some data to the stream, and calls the suppliedcallback
once the data has been fully handled. If an error occurs, thecallback
will be called with the error as its first argument. Thecallback
is called asynchronously and before'error'
is emitted.The return value is
true
if the internal buffer is less than thehighWaterMark
configured when the stream was created after admittingchunk
. Iffalse
is returned, further attempts to write data to the stream should stop until the'drain'
event is emitted.While a stream is not draining, calls to
write()
will bufferchunk
, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the'drain'
event will be emitted. Oncewrite()
returns false, do not write more chunks until the'drain'
event is emitted. While callingwrite()
on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.Writing data while the stream is not draining is particularly problematic for a
Transform
, because theTransform
streams are paused by default until they are piped or a'data'
or'readable'
event handler is added.If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a
Readable
and use pipe. However, if callingwrite()
is preferred, it is possible to respect backpressure and avoid memory issues using the'drain'
event:function write(data, cb) { if (!stream.write(data)) { stream.once('drain', cb); } else { process.nextTick(cb); } } // Wait for cb to be called before doing any other write. write('hello', () => { console.log('Write completed, do more writes now.'); });
A
Writable
stream in object mode will always ignore theencoding
argument.@param chunkOptional data to write. For streams not operating in object mode,
chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams,chunk
may be any JavaScript value other thannull
.@param encodingThe encoding, if
chunk
is a string.@param callbackCallback for when this chunk of data is flushed.
@returnsfalse
if the stream wishes for the calling code to wait for the'drain'
event to be emitted before continuing to write additional data; otherwisetrue
.
interface DiffieHellmanGroupConstructor
interface DSAKeyPairKeyObjectOptions
interface DSAKeyPairOptions<PubF extends KeyFormat, PrivF extends KeyFormat>
interface ECKeyPairKeyObjectOptions
interface ECKeyPairOptions<PubF extends KeyFormat, PrivF extends KeyFormat>
interface ED25519KeyPairKeyObjectOptions
interface ED25519KeyPairOptions<PubF extends KeyFormat, PrivF extends KeyFormat>
interface ED448KeyPairKeyObjectOptions
interface ED448KeyPairOptions<PubF extends KeyFormat, PrivF extends KeyFormat>
interface GeneratePrimeOptions
interface GeneratePrimeOptionsArrayBuffer
interface GeneratePrimeOptionsBigInt
interface HashOptions
- outputLength?: number
For XOF hash functions such as
shake256
, the outputLength option can be used to specify the desired output length in bytes (see the sketch after this list). - signal?: AbortSignal
When provided the corresponding
AbortController
can be used to cancel an asynchronous action.
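The sketch referenced above exercises the outputLength option with createHash():

import { createHash } from 'node:crypto';

// shake256 is an XOF, so the caller chooses the digest length (here 16 bytes).
const digest = createHash('shake256', { outputLength: 16 })
  .update('some data')
  .digest('hex');
console.log(digest.length); // 32 hex characters, i.e. 16 bytes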
interface JsonWebKeyInput
interface JwkKeyExportOptions
interface KeyExportOptions<T extends KeyFormat>
interface KeyPairSyncResult<T1 extends string | Buffer, T2 extends string | Buffer>
interface PrivateKeyInput
interface RandomUUIDOptions
- disableEntropyCache?: boolean
By default, to improve performance, Node.js will pre-emptively generate and persistently cache enough random data to generate up to 128 random UUIDs. To generate a UUID without using the cache, set
disableEntropyCache
totrue
.
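For instance (the printed UUID is illustrative):

import { randomUUID } from 'node:crypto';

// Bypass the internal entropy cache for this one call.
const id = randomUUID({ disableEntropyCache: true });
console.log(id); // e.g. '36b8f84d-df4e-4d49-b662-bcde71a8764f'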
interface RSAKeyPairKeyObjectOptions
interface RSAKeyPairOptions<PubF extends KeyFormat, PrivF extends KeyFormat>
interface RsaPrivateKey
interface RSAPSSKeyPairKeyObjectOptions
interface RSAPSSKeyPairOptions<PubF extends KeyFormat, PrivF extends KeyFormat>
interface RsaPublicKey
interface ScryptOptions
interface SecureHeapUsage
interface SigningOptions
interface SignJsonWebKeyInput
interface SignKeyObjectInput
interface SignPrivateKeyInput
interface VerifyJsonWebKeyInput
interface VerifyKeyObjectInput
interface VerifyPublicKeyInput
interface X25519KeyPairKeyObjectOptions
interface X25519KeyPairOptions<PubF extends KeyFormat, PrivF extends KeyFormat>
interface X448KeyPairKeyObjectOptions
interface X448KeyPairOptions<PubF extends KeyFormat, PrivF extends KeyFormat>
interface X509CheckOptions
- type BinaryLike = string | NodeJS.ArrayBufferView
- type BinaryToTextEncoding = 'base64' | 'base64url' | 'hex' | 'binary'
- type CharacterEncoding = 'utf8' | 'utf-8' | 'utf16le' | 'utf-16le' | 'latin1'
- type CipherCCMTypes = 'aes-128-ccm' | 'aes-192-ccm' | 'aes-256-ccm'
- type CipherChaCha20Poly1305Types = 'chacha20-poly1305'
- type CipherGCMTypes = 'aes-128-gcm' | 'aes-192-gcm' | 'aes-256-gcm'
- type CipherKey = BinaryLike | KeyObject
- type CipherMode = 'cbc' | 'ccm' | 'cfb' | 'ctr' | 'ecb' | 'gcm' | 'ocb' | 'ofb' | 'stream' | 'wrap' | 'xts'
- type CipherOCBTypes = 'aes-128-ocb' | 'aes-192-ocb' | 'aes-256-ocb'
- type DiffieHellmanGroup = Omit<DiffieHellman, 'setPublicKey' | 'setPrivateKey'>
- type DSAEncoding = 'der' | 'ieee-p1363'
- type ECDHKeyFormat = 'compressed' | 'uncompressed' | 'hybrid'
- type KeyFormat = 'pem' | 'der' | 'jwk'
- type KeyObjectType = 'secret' | 'public' | 'private'
- type KeyType = 'rsa' | 'rsa-pss' | 'dsa' | 'ec' | 'ed25519' | 'ed448' | 'x25519' | 'x448'
- type LargeNumberLike = NodeJS.ArrayBufferView | SharedArrayBuffer | ArrayBuffer | bigint
- type LegacyCharacterEncoding = 'ascii' | 'binary' | 'ucs2' | 'ucs-2'
- type UUID = `${string}-${string}-${string}-${string}-${string}`