- Extends: stream.Writable
Instances of fs.WriteStream are created and returned using the fs.createWriteStream() function.
The number of bytes written so far. Does not include data that is still queued for writing.
This property is true
if the underlying file has not been opened yet, i.e. before the 'ready'
event is emitted.
Is true
if it is safe to call writable.write()
, which means the stream has not been destroyed, errored, or ended.
Number of times writable.uncork()
needs to be called in order to fully uncork the stream.
Is true
after writable.end()
has been called. This property does not indicate whether the data has been flushed, for this use writable.writableFinished
instead.
Return the value of highWaterMark
passed when creating this Writable
.
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the highWaterMark
.
Is true
if the stream's buffer has been full and stream will emit 'drain'
.
Value: boolean
Change the default captureRejections
option on all new EventEmitter
objects.
Value: Symbol.for('nodejs.rejection')
See how to write a custom rejection handler
.
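A small sketch of both settings, assuming an async listener that rejects; the per-class Symbol.for('nodejs.rejection') handler is called instead of emitting 'error':
import { EventEmitter, captureRejectionSymbol } from 'node:events';
// Opt in for all EventEmitter objects created from here on.
EventEmitter.captureRejections = true;
class MyEmitter extends EventEmitter {
  // Custom rejection handler, used instead of emitting 'error'.
  [captureRejectionSymbol](err, eventName, ...args) {
    console.log(`rejection in '${eventName}':`, err.message);
  }
}
const ee = new MyEmitter();
ee.on('ping', async () => {
  throw new Error('boom');
});
ee.emit('ping'); // Prints: rejection in 'ping': boom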
By default, a maximum of 10
listeners can be registered for any single event. This limit can be changed for individual EventEmitter
instances using the emitter.setMaxListeners(n)
method. To change the default for all EventEmitter
instances, the events.defaultMaxListeners
property can be used. If this value is not a positive number, a RangeError
is thrown.
Take caution when setting the events.defaultMaxListeners
because the change affects all EventEmitter
instances, including those created before the change is made. However, calling emitter.setMaxListeners(n)
still has precedence over events.defaultMaxListeners
.
This is not a hard limit. The EventEmitter
instance will allow more listeners to be added but will output a trace warning to stderr indicating that a "possible EventEmitter memory leak" has been detected. For any single EventEmitter
, the emitter.getMaxListeners()
and emitter.setMaxListeners()
methods can be used to temporarily avoid this warning:
import { EventEmitter } from 'node:events';
const emitter = new EventEmitter();
emitter.setMaxListeners(emitter.getMaxListeners() + 1);
emitter.once('event', () => {
// do stuff
emitter.setMaxListeners(Math.max(emitter.getMaxListeners() - 1, 0));
});
The --trace-warnings
command-line flag can be used to display the stack trace for such warnings.
The emitted warning can be inspected with process.on('warning')
and will have the additional emitter
, type
, and count
properties, referring to the event emitter instance, the event's name and the number of attached listeners, respectively. Its name
property is set to 'MaxListenersExceededWarning'
.
This symbol shall be used to install a listener for only monitoring 'error'
events. Listeners installed using this symbol are called before the regular 'error'
listeners are called.
Installing a listener using this symbol does not change the behavior once an 'error'
event is emitted. Therefore, the process will still crash if no regular 'error'
listener is installed.
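For example, a monitoring listener can observe errors without changing the crash semantics; a sketch:
import { EventEmitter, errorMonitor } from 'node:events';
const ee = new EventEmitter();
// Observes 'error' events without consuming them.
ee.on(errorMonitor, (err) => {
  console.log('monitored:', err.message);
});
// A regular 'error' listener is still required to prevent a crash.
ee.on('error', (err) => {
  console.log('handled:', err.message);
});
ee.emit('error', new Error('oops'));
// Prints: monitored: oops
//         handled: oops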
events.EventEmitter
Closes writeStream
. Optionally accepts a callback that will be executed once the writeStream
is closed.
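A minimal sketch (the file name 'example.txt' is hypothetical):
import fs from 'node:fs';
const ws = fs.createWriteStream('example.txt');
ws.write('some data');
ws.close((err) => {
  if (err) throw err;
  console.log('writeStream closed');
});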
The writable.cork()
method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.
The primary intent of writable.cork()
is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, writable.cork()
buffers all the chunks until writable.uncork()
is called, which will pass them all to writable._writev()
, if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork()
without implementing writable._writev()
may have an adverse effect on throughput.
See also: writable.uncork()
, writable._writev()
.
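To see the batching effect described above, the following sketch implements a Writable with a writev option (this custom stream is illustrative, not part of the original text) and observes that the corked chunks arrive in a single batch:
import { Writable } from 'node:stream';
import process from 'node:process';
const out = new Writable({
  write(chunk, encoding, callback) {
    console.log('received a single chunk');
    callback();
  },
  writev(chunks, callback) {
    console.log(`received ${chunks.length} chunks in one batch`);
    callback();
  },
});
out.cork();
out.write('a');
out.write('b');
out.write('c');
process.nextTick(() => out.uncork());
// Prints: received 3 chunks in one batch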
Destroy the stream. Optionally emit an 'error'
event, and emit a 'close'
event (unless emitClose
is set to false
). After this call, the writable stream has ended and subsequent calls to write()
or end()
will result in an ERR_STREAM_DESTROYED
error. This is a destructive and immediate way to destroy a stream. Previous calls to write()
may not have drained, and may trigger an ERR_STREAM_DESTROYED
error. Use end()
instead of destroy if data should flush before close, or wait for the 'drain'
event before destroying the stream.
Once destroy()
has been called any further calls will be a no-op and no further errors except from _destroy()
may be emitted as 'error'
.
Implementors should not override this method, but instead implement writable._destroy()
.
Optional, an error to emit with 'error'
event.
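A minimal sketch (the file name 'example.txt' is hypothetical):
import fs from 'node:fs';
const ws = fs.createWriteStream('example.txt');
ws.on('error', (err) => console.error('error:', err.code));
ws.on('close', () => console.log('stream destroyed and closed'));
ws.destroy();
// Further writes fail with ERR_STREAM_DESTROYED via the 'error' event.
ws.write('too late');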
Synchronously calls each of the listeners registered for the event named eventName
, in the order they were registered, passing the supplied arguments to each.
Returns true
if the event had listeners, false
otherwise.
import { EventEmitter } from 'node:events';
const myEmitter = new EventEmitter();
// First listener
myEmitter.on('event', function firstListener() {
console.log('Helloooo! first listener');
});
// Second listener
myEmitter.on('event', function secondListener(arg1, arg2) {
console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
});
// Third listener
myEmitter.on('event', function thirdListener(...args) {
const parameters = args.join(', ');
console.log(`event with parameters ${parameters} in third listener`);
});
console.log(myEmitter.listeners('event'));
myEmitter.emit('event', 1, 2, 3, 4, 5);
// Prints:
// [
// [Function: firstListener],
// [Function: secondListener],
// [Function: thirdListener]
// ]
// Helloooo! first listener
// event with parameters 1, 2 in second listener
// event with parameters 1, 2, 3, 4, 5 in third listener
Calling the writable.end()
method signals that no more data will be written to the Writable
. The optional chunk
and encoding
arguments allow one final additional chunk of data to be written immediately before closing the stream.
Calling the writable.write() method after calling writable.end() will raise an error.
// Write 'hello, ' and then end with 'world!'.
import fs from 'node:fs';
const file = fs.createWriteStream('example.txt');
file.write('hello, ');
file.end('world!');
// Writing more now is not allowed!
Optional data to write. For streams not operating in object mode, chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk
may be any JavaScript value other than null
.
The encoding, if chunk is a string.
Returns an array listing the events for which the emitter has registered listeners. The values in the array are strings or Symbol
s.
import { EventEmitter } from 'node:events';
const myEE = new EventEmitter();
myEE.on('foo', () => {});
myEE.on('bar', () => {});
const sym = Symbol('symbol');
myEE.on(sym, () => {});
console.log(myEE.eventNames());
// Prints: [ 'foo', 'bar', Symbol(symbol) ]
Returns the current max listener value for the EventEmitter
which is either set by emitter.setMaxListeners(n)
or defaults to EventEmitter.defaultMaxListeners.
Returns the number of listeners listening for the event named eventName
. If listener
is provided, it will return how many times the listener is found in the list of the listeners of the event.
The name of the event being listened for
The event handler function
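For example:
import { EventEmitter } from 'node:events';
const myEmitter = new EventEmitter();
myEmitter.on('event', () => {});
myEmitter.on('event', () => {});
console.log(myEmitter.listenerCount('event'));
// Prints: 2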
Returns a copy of the array of listeners for the event named eventName
.
server.on('connection', (stream) => {
console.log('someone connected!');
});
console.log(util.inspect(server.listeners('connection')));
// Prints: [ [Function] ]
Alias for emitter.removeListener()
.
Adds the listener
function to the end of the listeners array for the event named eventName
. No checks are made to see if the listener
has already been added. Multiple calls passing the same combination of eventName
and listener
will result in the listener
being added, and called, multiple times.
server.on('connection', (stream) => {
console.log('someone connected!');
});
Returns a reference to the EventEmitter
, so that calls can be chained.
By default, event listeners are invoked in the order they are added. The emitter.prependListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.
import { EventEmitter } from 'node:events';
const myEE = new EventEmitter();
myEE.on('foo', () => console.log('a'));
myEE.prependListener('foo', () => console.log('b'));
myEE.emit('foo');
// Prints:
// b
// a
The callback function
Adds a one-time listener
function for the event named eventName
. The next time eventName
is triggered, this listener is removed and then invoked.
server.once('connection', (stream) => {
console.log('Ah, we have our first user!');
});
Returns a reference to the EventEmitter
, so that calls can be chained.
By default, event listeners are invoked in the order they are added. The emitter.prependOnceListener()
method can be used as an alternative to add the event listener to the beginning of the listeners array.
import { EventEmitter } from 'node:events';
const myEE = new EventEmitter();
myEE.once('foo', () => console.log('a'));
myEE.prependOnceListener('foo', () => console.log('b'));
myEE.emit('foo');
// Prints:
// b
// a
The callback function
Adds the listener
function to the beginning of the listeners array for the event named eventName
. No checks are made to see if the listener
has already been added. Multiple calls passing the same combination of eventName
and listener
will result in the listener
being added, and called, multiple times.
server.prependListener('connection', (stream) => {
console.log('someone connected!');
});
Returns a reference to the EventEmitter
, so that calls can be chained.
The callback function
Adds a one-time listener
function for the event named eventName
to the beginning of the listeners array. The next time eventName
is triggered, this listener is removed, and then invoked.
server.prependOnceListener('connection', (stream) => {
console.log('Ah, we have our first user!');
});
Returns a reference to the EventEmitter
, so that calls can be chained.
The callback function
Returns a copy of the array of listeners for the event named eventName
, including any wrappers (such as those created by .once()
).
import { EventEmitter } from 'node:events';
const emitter = new EventEmitter();
emitter.once('log', () => console.log('log once'));
// Returns a new Array with a function `onceWrapper` which has a property
// `listener` which contains the original listener bound above
const listeners = emitter.rawListeners('log');
const logFnWrapper = listeners[0];
// Logs "log once" to the console and does not unbind the `once` event
logFnWrapper.listener();
// Logs "log once" to the console and removes the listener
logFnWrapper();
emitter.on('log', () => console.log('log persistently'));
// Will return a new Array with a single function bound by `.on()` above
const newListeners = emitter.rawListeners('log');
// Logs "log persistently" twice
newListeners[0]();
emitter.emit('log');
Removes all listeners, or those of the specified eventName
.
It is bad practice to remove listeners added elsewhere in the code, particularly when the EventEmitter
instance was created by some other component or module (e.g. sockets or file streams).
Returns a reference to the EventEmitter
, so that calls can be chained.
Removes the specified listener
from the listener array for the event named eventName
.
const callback = (stream) => {
console.log('someone connected!');
};
server.on('connection', callback);
// ...
server.removeListener('connection', callback);
removeListener()
will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified eventName
, then removeListener()
must be called multiple times to remove each instance.
Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any removeListener()
or removeAllListeners()
calls after emitting and before the last listener finishes execution will not remove them from emit()
in progress. Subsequent events behave as expected.
import { EventEmitter } from 'node:events';
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();
const callbackA = () => {
console.log('A');
myEmitter.removeListener('event', callbackB);
};
const callbackB = () => {
console.log('B');
};
myEmitter.on('event', callbackA);
myEmitter.on('event', callbackB);
// callbackA removes listener callbackB but it will still be called.
// Internal listener array at time of emit [callbackA, callbackB]
myEmitter.emit('event');
// Prints:
// A
// B
// callbackB is now removed.
// Internal listener array [callbackA]
myEmitter.emit('event');
// Prints:
// A
Because listeners are managed using an internal array, calling this will change the position indices of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the emitter.listeners()
method will need to be recreated.
When a single function has been added as a handler multiple times for a single event (as in the example below), removeListener()
will remove the most recently added instance. In the example the once('ping')
listener is removed:
import { EventEmitter } from 'node:events';
const ee = new EventEmitter();
function pong() {
console.log('pong');
}
ee.on('ping', pong);
ee.once('ping', pong);
ee.removeListener('ping', pong);
ee.emit('ping');
ee.emit('ping');
Returns a reference to the EventEmitter
, so that calls can be chained.
The writable.setDefaultEncoding()
method sets the default encoding
for a Writable
stream.
The new default encoding
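A minimal sketch (the file name and the base64 payload are illustrative):
import fs from 'node:fs';
const ws = fs.createWriteStream('example.txt');
ws.setDefaultEncoding('base64');
// Strings written without an explicit encoding are now decoded as base64.
ws.write('aGVsbG8='); // writes the bytes of 'hello'
ws.end();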
By default EventEmitter
s will print a warning if more than 10
listeners are added for a particular event. This is a useful default that helps finding memory leaks. The emitter.setMaxListeners()
method allows the limit to be modified for this specific EventEmitter
instance. The value can be set to Infinity
(or 0
) to indicate an unlimited number of listeners.
Returns a reference to the EventEmitter
, so that calls can be chained.
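For example, to lift the limit for a single instance:
import { EventEmitter } from 'node:events';
const ee = new EventEmitter();
ee.setMaxListeners(Infinity); // no warning regardless of listener count
console.log(ee.getMaxListeners()); // Prints: Infinity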
The writable.uncork()
method flushes all data buffered since cork was called.
When using writable.cork()
and writable.uncork()
to manage the buffering of writes to a stream, defer calls to writable.uncork()
using process.nextTick()
. Doing so allows batching of all writable.write()
calls that occur within a given Node.js event loop phase.
stream.cork();
stream.write('some ');
stream.write('data ');
process.nextTick(() => stream.uncork());
If the writable.cork()
method is called multiple times on a stream, the same number of calls to writable.uncork()
must be called to flush the buffered data.
stream.cork();
stream.write('some ');
stream.cork();
stream.write('data ');
process.nextTick(() => {
stream.uncork();
// The data will not be flushed until uncork() is called a second time.
stream.uncork();
});
See also: writable.cork()
.
The writable.write()
method writes some data to the stream, and calls the supplied callback
once the data has been fully handled. If an error occurs, the callback
will be called with the error as its first argument. The callback
is called asynchronously and before 'error'
is emitted.
The return value is true
if the internal buffer is less than the highWaterMark
configured when the stream was created after admitting chunk
. If false
is returned, further attempts to write data to the stream should stop until the 'drain'
event is emitted.
While a stream is not draining, calls to write()
will buffer chunk
, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain'
event will be emitted. Once write()
returns false, do not write more chunks until the 'drain'
event is emitted. While calling write()
on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.
Writing data while the stream is not draining is particularly problematic for a Transform
, because the Transform
streams are paused by default until they are piped or a 'data'
or 'readable'
event handler is added.
If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable
and use pipe. However, if calling write()
is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain'
event:
function write(data, cb) {
if (!stream.write(data)) {
stream.once('drain', cb);
} else {
process.nextTick(cb);
}
}
// Wait for cb to be called before doing any other write.
write('hello', () => {
console.log('Write completed, do more writes now.');
});
A Writable
stream in object mode will always ignore the encoding
argument.
Optional data to write. For streams not operating in object mode, chunk
must be a {string}, {Buffer}, {TypedArray} or {DataView}. For object mode streams, chunk
may be any JavaScript value other than null
.
The encoding, if chunk
is a string.
Callback for when this chunk of data is flushed.
false
if the stream wishes for the calling code to wait for the 'drain'
event to be emitted before continuing to write additional data; otherwise true
.
Listens once to the abort
event on the provided signal
.
Listening to the abort
event on abort signals is unsafe and may lead to resource leaks since another third party with the signal can call e.stopImmediatePropagation()
. Unfortunately Node.js cannot change this since it would violate the web standard. Additionally, the original API makes it easy to forget to remove listeners.
This API allows safely using AbortSignal
s in Node.js APIs by solving these two issues by listening to the event such that stopImmediatePropagation
does not prevent the listener from running.
Returns a disposable so that it may be unsubscribed from more easily.
import { addAbortListener } from 'node:events';
function example(signal) {
let disposable;
try {
signal.addEventListener('abort', (e) => e.stopImmediatePropagation());
disposable = addAbortListener(signal, (e) => {
// Do something when signal is aborted.
});
} finally {
disposable?.[Symbol.dispose]();
}
}
Disposable that removes the abort
listener.
A utility method for creating a Writable
from a web WritableStream
.
Returns a copy of the array of listeners for the event named eventName
.
For EventEmitter
s this behaves exactly the same as calling .listeners
on the emitter.
For EventTarget
s this is the only way to get the event listeners for the event target. This is useful for debugging and diagnostic purposes.
import { getEventListeners, EventEmitter } from 'node:events';
{
const ee = new EventEmitter();
const listener = () => console.log('Events are fun');
ee.on('foo', listener);
console.log(getEventListeners(ee, 'foo')); // [ [Function: listener] ]
}
{
const et = new EventTarget();
const listener = () => console.log('Events are fun');
et.addEventListener('foo', listener);
console.log(getEventListeners(et, 'foo')); // [ [Function: listener] ]
}
Returns the currently set max amount of listeners.
For EventEmitter
s this behaves exactly the same as calling .getMaxListeners
on the emitter.
For EventTarget
s this is the only way to get the max event listeners for the event target. If the number of event handlers on a single EventTarget exceeds the max set, the EventTarget will print a warning.
import { getMaxListeners, setMaxListeners, EventEmitter } from 'node:events';
{
const ee = new EventEmitter();
console.log(getMaxListeners(ee)); // 10
setMaxListeners(11, ee);
console.log(getMaxListeners(ee)); // 11
}
{
const et = new EventTarget();
console.log(getMaxListeners(et)); // 10
setMaxListeners(11, et);
console.log(getMaxListeners(et)); // 11
}
import { on, EventEmitter } from 'node:events';
import process from 'node:process';
const ee = new EventEmitter();
// Emit later on
process.nextTick(() => {
ee.emit('foo', 'bar');
ee.emit('foo', 42);
});
for await (const event of on(ee, 'foo')) {
// The execution of this inner block is synchronous and it
// processes one event at a time (even with await). Do not use
// if concurrent execution is required.
console.log(event); // prints ['bar'] [42]
}
// Unreachable here
Returns an AsyncIterator
that iterates eventName
events. It will throw if the EventEmitter
emits 'error'
. It removes all listeners when exiting the loop. The value
returned by each iteration is an array composed of the emitted event arguments.
An AbortSignal
can be used to cancel waiting on events:
import { on, EventEmitter } from 'node:events';
import process from 'node:process';
const ac = new AbortController();
(async () => {
const ee = new EventEmitter();
// Emit later on
process.nextTick(() => {
ee.emit('foo', 'bar');
ee.emit('foo', 42);
});
for await (const event of on(ee, 'foo', { signal: ac.signal })) {
// The execution of this inner block is synchronous and it
// processes one event at a time (even with await). Do not use
// if concurrent execution is required.
console.log(event); // prints ['bar'] [42]
}
// Unreachable here
})();
process.nextTick(() => ac.abort());
Use the close
option to specify an array of event names that will end the iteration:
import { on, EventEmitter } from 'node:events';
import process from 'node:process';
const ee = new EventEmitter();
// Emit later on
process.nextTick(() => {
ee.emit('foo', 'bar');
ee.emit('foo', 42);
ee.emit('close');
});
for await (const event of on(ee, 'foo', { close: ['close'] })) {
console.log(event); // prints ['bar'] [42]
}
// the loop will exit after 'close' is emitted
console.log('done'); // prints 'done'
An AsyncIterator
that iterates eventName
events emitted by the emitter
Creates a Promise
that is fulfilled when the EventEmitter
emits the given event or that is rejected if the EventEmitter
emits 'error'
while waiting. The Promise
will resolve with an array of all the arguments emitted to the given event.
This method is intentionally generic and works with the web platform EventTarget interface, which has no special 'error'
event semantics and does not listen to the 'error'
event.
import { once, EventEmitter } from 'node:events';
import process from 'node:process';
const ee = new EventEmitter();
process.nextTick(() => {
ee.emit('myevent', 42);
});
const [value] = await once(ee, 'myevent');
console.log(value);
const err = new Error('kaboom');
process.nextTick(() => {
ee.emit('error', err);
});
try {
await once(ee, 'myevent');
} catch (err) {
console.error('error happened', err);
}
The special handling of the 'error'
event is only used when events.once()
is used to wait for another event. If events.once()
is used to wait for the 'error'
event itself, then it is treated as any other kind of event without special handling:
import { EventEmitter, once } from 'node:events';
const ee = new EventEmitter();
once(ee, 'error')
.then(([err]) => console.log('ok', err.message))
.catch((err) => console.error('error', err.message));
ee.emit('error', new Error('boom'));
// Prints: ok boom
An AbortSignal
can be used to cancel waiting for the event:
import { EventEmitter, once } from 'node:events';
const ee = new EventEmitter();
const ac = new AbortController();
async function foo(emitter, event, signal) {
try {
await once(emitter, event, { signal });
console.log('event emitted!');
} catch (error) {
if (error.name === 'AbortError') {
console.error('Waiting for the event was canceled!');
} else {
console.error('There was an error', error.message);
}
}
}
foo(ee, 'foo', ac.signal);
ac.abort(); // Abort waiting for the event
ee.emit('foo'); // Prints: Waiting for the event was canceled!
import { setMaxListeners, EventEmitter } from 'node:events';
const target = new EventTarget();
const emitter = new EventEmitter();
setMaxListeners(5, target, emitter);
A non-negative number. The maximum number of listeners per EventTarget
event.
Zero or more {EventTarget} or {EventEmitter} instances. If none are specified, n
is set as the default max for all newly created {EventTarget} and {EventEmitter} objects.
A utility method for creating a web WritableStream
from a Writable
.
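A minimal sketch (the file name 'example.txt' is hypothetical):
import { Writable } from 'node:stream';
import fs from 'node:fs';
const webWritable = Writable.toWeb(fs.createWriteStream('example.txt'));
const writer = webWritable.getWriter();
await writer.write('hello, ');
await writer.write('world!');
await writer.close();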