class WritableBase
implements NodeJS.WritableStream
extends Stream

Usage in Deno

```typescript
import { WritableBase } from "node:node__stream.d.ts";
```

Constructors

new
WritableBase(opts?: WritableOptions)

Properties

readonly
closed: boolean
Is `true` after `'close'` has been emitted.
destroyed: boolean
Is `true` after `writable.destroy()` has been called.
readonly
errored: Error | null
Returns the `Error` with which the stream was destroyed, or `null` if the stream has not been destroyed with an error.
readonly
writable: boolean
Is `true` if it is safe to call `writable.write()`, which means the stream has not been destroyed, errored, or ended.
readonly
writableCorked: number
Number of times `writable.uncork()` needs to be called in order to fully uncork the stream.
readonly
writableEnded: boolean
Is `true` after `writable.end()` has been called. This property does not indicate whether the data has been flushed; for that, use `writable.writableFinished` instead.
readonly
writableFinished: boolean
Is set to `true` immediately before the `'finish'` event is emitted.
readonly
writableHighWaterMark: number
Returns the value of `highWaterMark` passed when creating this `Writable`.
readonly
writableLength: number
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the `highWaterMark`.
readonly
writableNeedDrain: boolean
Is `true` if the stream's buffer has been full and the stream will emit `'drain'`. (See the sketch after this property list.)
readonly
writableObjectMode: boolean
Getter for the property `objectMode` of a given `Writable` stream.
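
The introspection properties above can be observed at runtime. A minimal sketch (not part of the original reference), assuming `Writable` from `node:stream` as the concrete implementation and a deliberately tiny `highWaterMark`:

```js
import { Writable } from 'node:stream';

// Tiny highWaterMark so a single write overfills the buffer.
const sink = new Writable({
  highWaterMark: 4,
  write(chunk, encoding, callback) {
    setImmediate(callback); // simulate a slow destination
  },
});

console.log(sink.writableHighWaterMark); // 4
console.log(sink.write('hello'));        // false: the buffer is now over the mark
console.log(sink.writableLength);        // 5 bytes queued (includes the in-flight chunk)
console.log(sink.writableNeedDrain);     // true: a 'drain' event will be emitted
```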

Methods

abstract
_construct(callback: (error?: Error | null) => void): void
_destroy(
error: Error | null,
callback: (error?: Error | null) => void,
): void
_final(callback: (error?: Error | null) => void): void
_write(
chunk: any,
encoding: BufferEncoding,
callback: (error?: Error | null) => void,
): void
abstract
_writev(
chunks: Array<{ chunk: any; encoding: BufferEncoding; }>,
callback: (error?: Error | null) => void,
): void
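
The underscore-prefixed hooks above are meant to be supplied by stream implementers rather than called directly. A minimal sketch (illustrative; the class name `CollectStream` and its behavior are assumptions) of a subclass implementing `_write()` and `_final()`:

```js
import { Writable } from 'node:stream';

class CollectStream extends Writable {
  constructor(options) {
    super(options);
    this.chunks = [];
  }

  // Called once per chunk; invoke callback() when handled, or callback(err) on failure.
  _write(chunk, encoding, callback) {
    this.chunks.push(chunk);
    callback();
  }

  // Called after end(), before 'finish' is emitted.
  _final(callback) {
    this.result = Buffer.concat(this.chunks);
    callback();
  }
}

const collector = new CollectStream();
collector.on('finish', () => console.log(collector.result.toString())); // "hello, world!"
collector.write('hello, ');
collector.end('world!');
```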
addListener(
event: "close",
listener: () => void,
): this
Event emitter. The events defined on this class are: 1. `close` 2. `drain` 3. `error` 4. `finish` 5. `pipe` 6. `unpipe` (a usage sketch follows these `addListener` overloads).
addListener(
event: "drain",
listener: () => void,
): this
addListener(
event: "error",
listener: (err: Error) => void,
): this
addListener(
event: "finish",
listener: () => void,
): this
addListener(
event: "pipe",
listener: (src: Readable) => void,
): this
addListener(
event: "unpipe",
listener: (src: Readable) => void,
): this
addListener(
event: string | symbol,
listener: (...args: any[]) => void,
): this
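
As a usage sketch (not from the original reference; the `sink` stream is an assumed example), listeners for the events listed above can be attached like this:

```js
import { Writable } from 'node:stream';

const sink = new Writable({ write(chunk, encoding, callback) { callback(); } });

sink.addListener('error', (err) => console.error('write failed:', err));
sink.addListener('finish', () => console.log("'finish': all data has been flushed"));
sink.addListener('close', () => console.log("'close': underlying resources released"));

sink.write('hello');
sink.end(); // emits 'finish', then (with the default autoDestroy) 'close'
```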
cork(): void
The `writable.cork()` method forces all written data to be buffered in memory. The buffered data will be flushed when either the uncork or end methods are called.

The primary intent of `writable.cork()` is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, `writable.cork()` buffers all the chunks until `writable.uncork()` is called, which will pass them all to `writable._writev()`, if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of `writable.cork()` without implementing `writable._writev()` may have an adverse effect on throughput.

See also: `writable.uncork()`, `writable._writev()`.
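
A small sketch (an assumption-based illustration, not from the original text) of how corked writes reach `_writev()` as a single batch:

```js
import { Writable } from 'node:stream';

const sink = new Writable({
  // Receives all buffered chunks at once when the stream is uncorked.
  writev(chunks, callback) {
    console.log('one batch of', chunks.length, 'chunks'); // 3
    callback();
  },
});

sink.cork();
sink.write('a');
sink.write('b');
sink.write('c');
process.nextTick(() => sink.uncork()); // all three chunks arrive in writev() together
```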
destroy(error?: Error): this
Destroy the stream. Optionally emit an `'error'` event, and emit a `'close'` event (unless `emitClose` is set to `false`).

After this call, the writable stream has ended and subsequent calls to `write()` or `end()` will result in an `ERR_STREAM_DESTROYED` error. This is a destructive and immediate way to destroy a stream. Previous calls to `write()` may not have drained, and may trigger an `ERR_STREAM_DESTROYED` error. Use `end()` instead of destroy if data should flush before close, or wait for the `'drain'` event before destroying the stream.

Once `destroy()` has been called any further calls will be a no-op and no further errors except from `_destroy()` may be emitted as `'error'`.

Implementors should not override this method, but instead implement `writable._destroy()`.
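
A short sketch (not from the original reference) of destroying a stream with an error; the listener wiring and messages are illustrative:

```js
import { Writable } from 'node:stream';

const sink = new Writable({ write(chunk, encoding, callback) { callback(); } });

sink.on('error', (err) => console.error('destroyed with:', err.message));
sink.on('close', () => console.log('closed; destroyed =', sink.destroyed));

sink.destroy(new Error('boom'));
// Further writes fail with ERR_STREAM_DESTROYED.
sink.write('late data', (err) => console.error(err && err.code)); // 'ERR_STREAM_DESTROYED'
```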
emit(event: "close"): boolean
emit(event: "drain"): boolean
emit(
event: "error",
err: Error,
): boolean
emit(event: "finish"): boolean
emit(
event: "pipe",
src: Readable,
): boolean
emit(
event: "unpipe",
src: Readable,
): boolean
emit(
event: string | symbol,
...args: any[],
): boolean
end(cb?: () => void): this
Calling the `writable.end()` method signals that no more data will be written to the `Writable`. The optional `chunk` and `encoding` arguments allow one final additional chunk of data to be written immediately before closing the stream.

Calling the write method after calling end will raise an error.

```js
// Write 'hello, ' and then end with 'world!'.
import fs from 'node:fs';

const file = fs.createWriteStream('example.txt');
file.write('hello, ');
file.end('world!');
// Writing more now is not allowed!
```
end(
chunk: any,
cb?: () => void,
): this
end(
chunk: any,
encoding: BufferEncoding,
cb?: () => void,
): this
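
The three-argument overload passes a final chunk, its encoding, and a completion callback in one call. A hedged usage sketch (the file name and hex payload are arbitrary examples):

```js
import fs from 'node:fs';

const file = fs.createWriteStream('example.txt');
file.write('payload: ');
// '48656c6c6f' is hex for 'Hello'; the callback runs once the stream has finished.
file.end('48656c6c6f', 'hex', () => {
  console.log('stream finished; bytes written:', file.bytesWritten);
});
```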
on(
event: "close",
listener: () => void,
): this
on(
event: "drain",
listener: () => void,
): this
on(
event: "error",
listener: (err: Error) => void,
): this
on(
event: "finish",
listener: () => void,
): this
on(
event: "pipe",
listener: (src: Readable) => void,
): this
on(
event: "unpipe",
listener: (src: Readable) => void,
): this
on(
event: string | symbol,
listener: (...args: any[]) => void,
): this
once(
event: "close",
listener: () => void,
): this
once(
event: "drain",
listener: () => void,
): this
once(
event: "error",
listener: (err: Error) => void,
): this
once(
event: "finish",
listener: () => void,
): this
once(
event: "pipe",
listener: (src: Readable) => void,
): this
once(
event: "unpipe",
listener: (src: Readable) => void,
): this
once(
event: string | symbol,
listener: (...args: any[]) => void,
): this
prependListener(
event: "close",
listener: () => void,
): this
prependListener(
event: "drain",
listener: () => void,
): this
prependListener(
event: "error",
listener: (err: Error) => void,
): this
prependListener(
event: "finish",
listener: () => void,
): this
prependListener(
event: "pipe",
listener: (src: Readable) => void,
): this
prependListener(
event: "unpipe",
listener: (src: Readable) => void,
): this
prependListener(
event: string | symbol,
listener: (...args: any[]) => void,
): this
prependOnceListener(
event: "close",
listener: () => void,
): this
prependOnceListener(
event: "drain",
listener: () => void,
): this
prependOnceListener(
event: "error",
listener: (err: Error) => void,
): this
prependOnceListener(
event: "finish",
listener: () => void,
): this
prependOnceListener(
event: "pipe",
listener: (src: Readable) => void,
): this
prependOnceListener(
event: "unpipe",
listener: (src: Readable) => void,
): this
prependOnceListener(
event: string | symbol,
listener: (...args: any[]) => void,
): this
removeListener(
event: "close",
listener: () => void,
): this
removeListener(
event: "drain",
listener: () => void,
): this
removeListener(
event: "error",
listener: (err: Error) => void,
): this
removeListener(
event: "finish",
listener: () => void,
): this
removeListener(
event: "pipe",
listener: (src: Readable) => void,
): this
removeListener(
event: "unpipe",
listener: (src: Readable) => void,
): this
removeListener(
event: string | symbol,
listener: (...args: any[]) => void,
): this
setDefaultEncoding(encoding: BufferEncoding): this
The `writable.setDefaultEncoding()` method sets the default `encoding` for a `Writable` stream.
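
A brief sketch (not from the original reference) showing the default encoding in effect; the hex payload is an arbitrary example:

```js
import { Writable } from 'node:stream';

const sink = new Writable({
  write(chunk, encoding, callback) {
    console.log(chunk); // <Buffer 48 65 6c 6c 6f> for the first write below
    callback();
  },
});

sink.setDefaultEncoding('hex');
sink.write('48656c6c6f');    // decoded as hex because of the default
sink.write('hello', 'utf8'); // an explicit encoding overrides the default
sink.end();
```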
uncork(): void
The `writable.uncork()` method flushes all data buffered since cork was called.

When using `writable.cork()` and `writable.uncork()` to manage the buffering of writes to a stream, defer calls to `writable.uncork()` using `process.nextTick()`. Doing so allows batching of all `writable.write()` calls that occur within a given Node.js event loop phase.

```js
stream.cork();
stream.write('some ');
stream.write('data ');
process.nextTick(() => stream.uncork());
```

If the `writable.cork()` method is called multiple times on a stream, the same number of calls to `writable.uncork()` must be called to flush the buffered data.

```js
stream.cork();
stream.write('some ');
stream.cork();
stream.write('data ');
process.nextTick(() => {
  stream.uncork();
  // The data will not be flushed until uncork() is called a second time.
  stream.uncork();
});
```

See also: `writable.cork()`.
write(
chunk: any,
callback?: (error: Error | null | undefined) => void,
): boolean
The `writable.write()` method writes some data to the stream, and calls the supplied `callback` once the data has been fully handled. If an error occurs, the `callback` will be called with the error as its first argument. The `callback` is called asynchronously and before `'error'` is emitted.

The return value is `true` if the internal buffer is less than the `highWaterMark` configured when the stream was created after admitting `chunk`. If `false` is returned, further attempts to write data to the stream should stop until the `'drain'` event is emitted.

While a stream is not draining, calls to `write()` will buffer `chunk`, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the `'drain'` event will be emitted. Once `write()` returns false, do not write more chunks until the `'drain'` event is emitted. While calling `write()` on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

Writing data while the stream is not draining is particularly problematic for a `Transform`, because the `Transform` streams are paused by default until they are piped or a `'data'` or `'readable'` event handler is added.

If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a `Readable` and use pipe. However, if calling `write()` is preferred, it is possible to respect backpressure and avoid memory issues using the `'drain'` event:

```js
function write(data, cb) {
  if (!stream.write(data)) {
    stream.once('drain', cb);
  } else {
    process.nextTick(cb);
  }
}

// Wait for cb to be called before doing any other write.
write('hello', () => {
  console.log('Write completed, do more writes now.');
});
```

A `Writable` stream in object mode will always ignore the `encoding` argument.
write(
chunk: any,
encoding: BufferEncoding,
callback?: (error: Error | null | undefined) => void,
): boolean