Autobase is used to automatically rebase multiple causally-linked Hypercores into a single, linearized Hypercore. The output of an Autobase is 'just a Hypercore', which means it can be used to transform higher-level data structures (like Hyperbee) into multiwriter data structures with minimal additional work.
Although Autobase is still under development, it is already used in active projects: Keet rooms, for example, are powered by Autobase.


Install with npm:
npm install autobase
An Autobase is constructed from a known set of trusted input Hypercores. Authorizing these inputs is outside of the scope of Autobase -- this module is unopinionated about trust and assumes it comes from another channel.


const base = new Autobase([options])

Creates a new Autobase from a set of input/output Hypercores.
The optional options object supports the following properties:
  • inputs: The list of Hypercores for Autobase to linearize
  • outputs: An optional list of output Hypercores containing linearized views
  • localInput: The Hypercore that will be written to in base.append operations
  • localOutput: A writable Hypercore that linearized views will be persisted into
  • autostart: Create a linearized view (base.view) immediately
  • apply: Create a linearized view (base.view) immediately, using this apply function
  • unwrap: base.view.get calls will return node values only instead of full nodes



base.inputs

The list of input Hypercores.


base.outputs

The list of output Hypercores containing persisted linearized views.


base.localInput

If non-null, this Hypercore will be appended to in base.append operations.


base.localOutput

If non-null, base.view will be persisted into this Hypercore.


base.started

A Boolean indicating if base.view has been created.
See the linearized views section for details about the apply option.
Prior to calling base.start(), base.view will be null.


const clock = base.clock()

Returns a Map containing the latest lengths for all Autobase inputs.
The Map has the form: (hex-encoded-key) -> (Hypercore length)

await Autobase.isAutobase(core)

Returns true if core is an Autobase input or an output.

await base.append(value, [clock], [input])

Append a new value to the Autobase.
  • clock: the causal clock; defaults to base.latest
  • input: the input Hypercore to append to; defaults to base.localInput

const clock = await base.latest([input1, input2, ...])

Generate a causal clock linking the latest entries of each input.
latest will update the input Hypercores (input.update()) prior to returning the clock.
You generally will not need to use this, and can instead just use append with the default clock:
await base.append('hello world')

await base.addInput(input)

Adds a new input Hypercore.
  • input must either be a fresh Hypercore, or a Hypercore that has previously been used as an Autobase input.

await base.removeInput(input)

Removes an input Hypercore.
  • input must be a Hypercore that is currently an input.
Removing an input, and then subsequently linearizing the Autobase into an existing output, could result in a large truncation operation on that output -- this is effectively 'purging' that input entirely.
Future releases will see the addition of 'soft removal', which will freeze an input at a specific length, and not process blocks past that length, while still preserving that input's history in linearized views. For most applications, soft removal matches the intuition behind 'removing a user'.

await base.addOutput(output)

Adds a new output Hypercore.
  • output must be either a fresh Hypercore or a Hypercore that was previously used as an Autobase output.
If base.outputs is not empty, Autobase will do 'remote linearizing': base.view.update() will treat these outputs as the 'trunk', minimizing the amount of local re-processing they need to do during updates.

await base.removeOutput(output)

Removes an output Hypercore.
  • output must be a Hypercore, or a Hypercore key, that is currently an output (in base.outputs).


In order to generate shareable linearized views, Autobase must first be able to generate a deterministic, causal ordering over all the operations in its input Hypercores.
Every input node contains embedded causal information (a vector clock) linking it to previous nodes. By default, when a node is appended without additional options (i.e., base.append('hello')), Autobase will embed a clock containing the latest known lengths of all other inputs.
Using the vector clocks in the input nodes, Autobase can generate two types of streams:

Causal Streams

Causal streams start at the heads (the latest blocks) of all inputs and walk backward, yielding nodes in a deterministic order (based on both the clock and the input key) such that anybody who regenerates the stream from the same inputs will observe the same ordering.
They are designed to fail in the presence of unavailable nodes: the deterministic ordering ensures that every indexer processes input nodes in the same order.
The simplest kind of linearized view (const view = base.linearize()) is just a Hypercore containing the results of a causal stream in reversed order (block N in the view will not be causally dependent on block N+1).

const stream = base.createCausalStream()

Generate a Readable stream of input blocks with deterministic, causal ordering.
Any two users who create an Autobase with the same set of inputs, and the same lengths (i.e., both users have the same initial states), will produce identical causal streams.
If an input node is causally-dependent on another node that is not available, the causal stream will not proceed past that node, as this would produce inconsistent output.

Read Streams

Similar to Hypercore.createReadStream(), this stream starts at the beginning of each input, and does not guarantee the same deterministic ordering as the causal stream. Unlike causal streams, which are used mainly for indexing, read streams can be used to observe updates. And since they move forward in time, they can be live.

const stream = base.createReadStream([options])

Generate a Readable stream of input blocks, from earliest to latest.
Unlike createCausalStream, the ordering of createReadStream is not deterministic. The read stream only gives you the guarantee that every node it yields will not be causally-dependent on any node yielded later.
Read streams have a public property checkpoint, which can be used to create new read streams that resume from the checkpoint's position:
const stream1 = base.createReadStream()
// Do something with stream1 here
const stream2 = base.createReadStream({ checkpoint: stream1.checkpoint }) // Resume from stream1.checkpoint
createReadStream can be passed two custom async hooks:
  • onresolve: Called when an unsatisfied node (a node that links to an unknown input) is encountered. Can be used to add inputs to the Autobase dynamically.
    • Returning true indicates that you added new inputs to the Autobase, and so the read stream should begin processing those inputs.
    • Returning false indicates that you did not resolve the missing links, and so the node should be yielded immediately as is.
  • onwait: Called after each node is yielded. Can be used to add inputs to the Autobase dynamically.
options include:
  • live: Enable live mode (the stream will continuously yield new nodes)
  • tail: When in live mode, start at the latest clock instead of the earliest
  • map: A sync map function of the form (node) => node
  • checkpoint: Resume from where a previous read stream left off (readStream.checkpoint)
  • wait: If false, the read stream will only yield previously-downloaded blocks
  • onresolve: A resolve hook (described above) of the form async (node) => true | false
  • onwait: A wait hook (described above) of the form async (node) => undefined

Linearized Views

Autobase is designed for computing and sharing linearized views over many input Hypercores. A linearized view is a 'merged' view over the inputs, giving you a way of interacting with the N input Hypercores as though they were a single, combined Hypercore.
These views, instances of the LinearizedView class, in many ways look and feel like normal Hypercores. They support get, update, and length operations.
By default, a view is just a persisted version of an Autobase's causal stream, saved into a Hypercore. But you can do a lot more with them: by passing a function into linearize's apply option, you can define your own indexing strategies.
Linearized views are especially powerful because they can be persisted to a Hypercore using the new truncate API added in Hypercore 10. This means that peers querying a multiwriter data structure don't need to read in all changes and apply them themselves. Instead, they can start from an existing view that another peer has shared. If that view is missing any data from the inputs, Autobase will create a 'view over the remote view', applying only the changes necessary to bring the remote view up to date. Best of all, this happens automatically.

Customizing Views with apply

The default linearized view is just a persisted causal stream -- input nodes are recorded into an output Hypercore in causal order, with no further modifications. This minimally-processed view is useful on its own for applications that don't follow an event-sourcing pattern (e.g., chat), but most use cases involve processing the operations in the inputs into indexed representations.
To support indexing, base.start can be provided with an apply function that's passed batches of input nodes during rebasing, and can choose what to store in the output. Inside apply, the view can be directly mutated through the view.append method, and these mutations will be batched when the call exits.
The simplest apply function is just a mapper, a function that modifies each input node and saves it into the view in a one-to-one fashion. Here's an example that uppercases String inputs, and saves the resulting view into an output Hypercore:
async apply (batch) {
  batch = ({ value }) => Buffer.from(value.toString('utf-8').toUpperCase(), 'utf-8'))
  await view.append(batch)
}

// After base.start, the linearized view is available as a property on the Autobase
await base.view.update()
More sophisticated indexing might require multiple appends per input node, or reading from the view during apply -- both are perfectly valid. The multiwriter Hyperbee example shows how this apply pattern can be used to build Hypercore-based indexing data structures.

View Creation

base.start({ apply, unwrap } = {})

Creates a new linearized view and sets it on base.view. The view mirrors the Hypercore API wherever possible, meaning it can be used wherever you would normally use a Hypercore.
You can either call base.start manually when you want to start using base.view, or you can pass either apply or autostart options to the Autobase constructor. If these constructor options are present, Autobase will start immediately.
If you choose to call base.start manually, it must only be called once.
options include:
  • unwrap: Set this to automatically unwrap gets so they only return .value
  • apply: The apply function described above, of the form (batch) => {}


view.status

The status of the last linearize operation.
Returns an object of the form { added: N, removed: M } where:
  • added indicates how many nodes were appended to the output during the linearization
  • removed indicates how many nodes were truncated from the output during the linearization


view.length

The length of the view. Similar to hypercore.length.

await view.update()

Make sure the view is up to date.

const entry = await view.get(idx, [options])

Get an entry from the view. If you set unwrap to true, it returns entry.value. Otherwise, it returns an entry similar to this:
{
  clock, // the causal clock this entry was created at
  value  // the value that is stored here
}

await view.append([blocks])

Append new blocks to the view. This operation can only be performed inside the apply function.