Rust

Introduction

This documentation covers the Kaspa WASM SDK - WASM bindings for the Rust infrastructure (Rusty Kaspa) that allow Rust code to be used from within JavaScript/TypeScript environments.

Please contribute! This mdbook is very easy to edit. If you would like to suggest any changes or add anything, please check the Contributing page.

Technologies

Rust WASM JavaScript TypeScript NodeJS NWJS


 

This project is built using multi-platform Rust crates (libraries). The Rust framework is exposed to JavaScript via WASM, while the framework itself can also be built to run natively on any major platform (e.g. Windows, Linux, and macOS).

WASM32 is compatible with all major browsers and Node.js, as well as environments such as NWJS and Electron. It is also compatible with Chrome Extension manifest version 3, which allows it to be used in Chrome browser extensions.

WASM SDK

This WASM SDK is generated directly from the Rusty Kaspa codebase, where Rust functions are compiled into WebAssembly and then exposed to JavaScript and TypeScript environments as native JavaScript functions.

Since the WASM SDK has integrated WebSocket support, it offers RPC connectivity, turning all RPC calls into async function calls in JavaScript (in Rust these functions are also async).

The WASM SDK offers bindings for transaction generation, address management, and transaction signing, as well as various helper classes for UTXO management.

Documentation

TypeScript and Rust API documentation is available at the following URLs:

Discord

You can find help on the Kaspa Discord server, in the #development channel.

Redistributables

WASM redistributables are available prebuilt for web browsers and Node.js. The entire framework can also be built from the Rusty Kaspa sources (into WASM) or used directly within Rust for Rust-based application integration.

You can currently download the latest version of the WASM SDK from: https://aspectron.com/en/projects/kaspa-wasm.html

Development releases

Building from source & examples

Additional WASM SDK information can be found in the WASM SDK README.

Git

This SDK is part of the larger Rusty Kaspa framework, available at https://github.com/kaspanet/rusty-kaspa.

The following crates implement key functionality exposed by this SDK:

Security Considerations

WASM SDK binaries are built directly from the Rust project codebase. The WASM SDK provides all necessary primitives to interoperate with Kaspa from Node.js or browser environments.

Using WASM SDK

TODO: INTEGRITY EXAMPLE FOR LOADING WASM SDK INTO HTML USING <script> TAG

To load the WASM SDK, you can use the “kaspa” or “kaspa-wasm” NPM modules; however, for security-critical applications, you should either build the WASM SDK from the Rust source code or obtain prebuilt binaries and embed them into your project.

NPM versioning

For security-centric applications, any 3rd-party JavaScript node module dependencies should be considered insecure due to a multitude of attack vectors, such as code injection vulnerabilities.

If you have no choice and absolutely need to use something from NPM, review all dependencies manually and make sure to pin the full version of each dependency, including the patch number. This helps prevent unexpected dependency code updates when new versions are published on NPM.

Manually reviewing all dependencies and embedding them directly into your project (or into a library your project relies on) is another good option to reduce exposure to dependency changes.

Serving

It is generally desirable to serve WASM libraries, as well as other cryptocurrency application components, from a server controlled by you.

Using subresource integrity

When loading WASM or your own scripts via the <script> tag, you can specify an integrity hash of the target resources.

<script
  src="https://example.com/example-framework.js"
  integrity="sha384-oqVuAfXRKap7fdgcCY5uykM6+R9GqQ8K/uxy9rx7HNQlGYl1kPzQho1wx4JwY8wC"
  crossorigin="anonymous"></script>

https://developer.mozilla.org/en-US/docs/Web/Security/Subresource_Integrity#subresource_integrity_with_the_script_element

Integration Overview

This section provides a basic overview of Kaspa from the standpoint of integration.

Wallet Address Derivation

Address HD derivation paths used by Kaspa wallets are BIP-32 compatible and use the following derivation path:

m / purpose' / coin_type' / account' / change / address_index
  • Single signers use purpose value 44'
  • Multi-signers use purpose value 45'
  • The coin type value is 111111'
  • KDX and kaspanet.io wallets used a different coin type of 972', which is now deprecated.
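
For illustration, these paths can be expressed as plain strings. The following is a minimal sketch; the helper functions below are hypothetical and not part of the SDK:

const COIN_TYPE = 111111;

const singleSignerPath = (account, change, index) =>
    `m/44'/${COIN_TYPE}'/${account}'/${change}/${index}`;
const multiSignerPath = (account, change, index) =>
    `m/45'/${COIN_TYPE}'/${account}'/${change}/${index}`;

console.log(singleSignerPath(0, 0, 0)); // m/44'/111111'/0'/0/0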

Transactions

Transaction constraints are measured using custom mass units as well as serialized byte sizes. For a detailed overview, please see the Transactions section.

RPC and UTXO aggregation

To get access to UTXOs, you need to use the getUtxosByAddresses() API call and register for UtxosChangedNotification updates. To enable this functionality, the kaspad node needs to be started with the UTXO index enabled.
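
A minimal sketch of these calls, assuming an already-connected RpcClient instance and an addresses array (complete examples appear in the Monitoring UTXOs section):

    let utxos = await rpc.getUtxosByAddresses({ addresses });
    await rpc.subscribeUtxosChanged(addresses); // requires a node started with --utxoindex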

DAA Score

The DAA (Difficulty Adjustment Algorithm) score is used as a time measurement unit in Kaspa (similar to "block height" in Bitcoin).

WIF

Kaspa does not support WIF (Wallet Import Format); Kaspa wallets use an XPrv (extended private key) or a seed/mnemonic for wallet import and export.
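
For example, importing from an extended private key uses the XPrivateKey class shown later in the Signing section (the key string below is truncated for illustration):

    let xkey = new XPrivateKey("kprv5y2qurMHCsXY...", false, 0n);
    let private_key = xkey.receiveKey(0); // first receive-chain private key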

Other Common Parameters

Below, you will find some parameters that are commonly used in multi-coin wallets:

name = Kaspa
unit = KAS

// Derivation parameters
SingleSignerPurpose = 44
MultiSigPurpose = 45
CoinType = 111111

// KaspaMainnetPrivate is the version that is used for
// kaspa mainnet bip32 private extended keys.
// Encodes to kprv in base58.
var KaspaMainnetPrivate = [4]byte{
    0x03,
    0x8f,
    0x2e,
    0xf4,
}

// KaspaMainnetPublic is the version that is used for
// kaspa mainnet bip32 public extended keys.
// Encodes to kpub in base58.
var KaspaMainnetPublic = [4]byte{
    0x03,
    0x8f,
    0x33,
    0x2e,
}

const (
    // PubKey addresses always have the version byte set to 0.
    pubKeyAddrID = 0x00

    // PubKeyECDSA addresses always have the version byte set to 1.
    pubKeyECDSAAddrID = 0x01

    // ScriptHash addresses always have the version byte set to 8.
    scriptHashAddrID = 0x08
)

// Map from strings to Bech32 address prefix constants for parsing purposes.
var stringsToBech32Prefixes = map[string]Bech32Prefix{
    "kaspa":     Bech32PrefixKaspa,
    "kaspadev":  Bech32PrefixKaspaDev,
    "kaspatest": Bech32PrefixKaspaTest,
    "kaspasim":  Bech32PrefixKaspaSim,
}

Running

There are currently two full-node implementations for Kaspa: the original, developed in Golang, and a newer one called 'Rusty Kaspa', developed in Rust. The Rust implementation has higher performance, and it is expected that the Golang implementation will be deprecated.

The Rust WASM framework can connect to a Rusty Kaspa node directly, or to a Golang node via a proxy.

Running an RPC proxy

The RPC proxy is required only for the Golang node implementation.

Download the latest binaries from the WASM release page at https://aspectron.com/projects/kaspa-wasm.html

Simply run kaspa-wrpc-proxy. By default, it will attempt to connect to the kaspad node on the local network.

Building from source

Please follow up to date build instructions at https://github.com/kaspanet/rusty-kaspa/blob/master/README.md

If your environment is already set up, you can build and run the proxy as follows:

cd rpc/wrpc/proxy
cargo build --release
cargo run --release

The resulting binary will be located in the workspace target/ directory.

Docker

Rusty Kaspa images based on Alpine Linux can be found here:

  • Docker build scripts: https://github.com/supertypo/docker-rusty-kaspa
  • Published images: https://hub.docker.com/r/supertypo/rusty-kaspad

UTXO Index

The UTXO index is an auxiliary database that enables the Kaspa node to perform additional tracking of transaction addresses. This allows you to set up notifications for when a new transaction matching your addresses is detected.

If the UTXO index is not enabled, RPC calls requesting UTXOs by addresses will result in an error.

To enable the UTXO index, run the node with the --utxoindex command-line argument.

Integrating

Loading into a Web App

Loading in a web browser requires importing the JavaScript module and awaiting its async bootstrap handler, as follows:

<html>
    <head>
        <script type="module">
            import * as kaspa_wasm from './kaspa/kaspa-wasm.js';
            (async () => {
                const kaspa = await kaspa_wasm.default('./kaspa/kaspa-wasm_bg.wasm');
            })();
        </script>
    </head>
    <body></body>
</html>

Loading into a Node.js App

For Node.js, Kaspa WASM SDK is available as a regular Node.js module that can be loaded using require().

// W3C WebSocket module shim
globalThis.WebSocket = require('websocket').w3cwebsocket;

let {RpcClient,Encoding,init_console_panic_hook,defer} = require('./kaspa-rpc');
// init_console_panic_hook();

let URL = "ws://127.0.0.1:17110";
let rpc = new RpcClient(Encoding.Borsh,URL);

(async () => {
    await rpc.connect();
    let info = await rpc.getInfo();
    console.log(info);

    await rpc.disconnect();
})();

The Node.js WebSocket shim

To use the WASM RPC client in the Node.js environment, you need to introduce a W3C WebSocket-compatible object before loading the WASM32 library. You can use any Node.js module that exposes a W3C-compatible WebSocket implementation. Two such modules are websocket (which provides a custom implementation) and isomorphic-ws (built on top of the ws WebSocket module).

You can use the following shims:

// WebSocket
globalThis.WebSocket = require('websocket').w3cwebsocket;
// isomorphic-ws
globalThis.WebSocket = require('isomorphic-ws');

RPC

Rusty Kaspa integrates support for the following RPC protocols:

  • gRPC (native to Kaspa)
  • WebSocket-framed wRPC/JSON-RPC protocol
  • WebSocket-framed wRPC/Borsh protocol

When using Borsh, the server and the client should be built from the same source.

gRPC

A gRPC connection can be established by any gRPC-capable client that follows the Kaspa gRPC protocol specification.

wRPC

Protocol encoding for wRPC is configurable via the initialization API or as command-line switches in applications such as kaspa-wrpc-proxy or the Rusty Kaspa full-node daemon.

The Rusty Kaspa framework includes an RPC client and server, making it easy to create wRPC endpoints from within Rust as well as from JavaScript using the WASM SDK.

WASM RpcClient

RpcClient Documentation

Example

The following example runs under NodeJS and registers to receive a certain number of notifications, after which it cleanly shuts down and exits.


We start by declaring the WebSocket shim needed for the RPC connection in NodeJS, then we load the required imports and initialize the RPC client.

// W3C WebSocket module shim
globalThis.WebSocket = require('websocket').w3cwebsocket;

let {RpcClient,Encoding,defer} = require('./kaspa-rpc');

const MAX_NOTIFICATIONS = 10;
let URL = "ws://127.0.0.1:17110";
let rpc = new RpcClient(Encoding.Borsh, URL);

Once we have the RPC client, we call the async connect() function, which blocks async execution until the connection is established.

(async () => {
    // ...
    await rpc.connect();
    // ...

Once connected, you can make RPC requests to the Kaspa daemon. RPC functions are always async and currently return pure JavaScript objects. For example, getInfo() will return an object as follows:

    let info = await rpc.getInfo();
    console.log(info);
    // {
    //   serverVersion : "0.12.8",
    //   isSynced : false,
    //   ...
    // }

You can also subscribe to event notifications by using the rpc.notify() function to register a notification handler callback. In this example, defer() returns a "deferred promise" that can be resolved manually at a later time.

    let finish = defer();
    let seq = 0;
    // register notification handler
    await rpc.notify(async (op, payload) => {
        console.log(`#${seq} - `,"op:",op,"payload:",payload);
        seq++;
        if (seq == MAX_NOTIFICATIONS) {
            // await rpc.disconnect();
            console.log(`exiting after ${seq} notifications`);
            finish.resolve();
        }
    });

    // test subscription
    console.log("subscribing...");
    await rpc.subscribeDaaScore();

At this point we block on the finish promise and wait for MAX_NOTIFICATIONS notifications to occur. The notification handler executes finish.resolve() after MAX_NOTIFICATIONS, allowing await finish; to resolve and execution on the primary async path to continue.

We then clean up by calling rpc.notify(null); to unregister the notification handler and rpc.disconnect() to disconnect from the RPC endpoint.

    // wait until notifier signals completion
    await finish;
    // clear notification handler
    await rpc.notify(null);
    // disconnect RPC interface
    await rpc.disconnect();

})();

Transactions

Kaspa transactions are similar to those of Bitcoin, but there are some differences in how Kaspa calculates transaction constraints.

The following sections cover transaction constraints such as mass and dust limits, how to calculate them, creating transactions, monitoring UTXO changes, and signing transactions.

Constraints

Kaspa transactions have the following constraints:

Transaction size metrics are used to calculate transaction mass and to check against dust limits.

Note on serialization

Kaspa is serialization-agnostic - the software infrastructure provides compatibility layers such as gRPC (which uses protobuf under the hood) and WebSockets with JSON or Borsh (binary) serialization.

However, when considering transaction metrics, sizes of different elements must follow some deterministic serialization rules. To address this, Kaspa builds estimates based on its own internal serialization. The following Transaction Size section describes how transactions are serialized in order to obtain their metrics.

Transaction Size

Transaction size is used to calculate the mass value of the transaction to evaluate it against various network limits. It is also used in the dust calculation algorithm.

How the estimated serialized size is calculated

transaction_estimated_serialized_size() produces the estimated size of a transaction in some serialization. This has to be deterministic, but not necessarily accurate, since it's only used as the size component in the transaction and block mass limit calculation.

#![allow(unused)]

fn main() {
pub fn transaction_estimated_serialized_size(tx: &Transaction) -> u64 {
    let mut size: u64 = 0;
    size += 2; // Tx version (u16)
    size += 8; // Number of inputs (u64)
    let inputs_size: u64 = tx.inputs.iter().map(transaction_input_estimated_serialized_size).sum();
    size += inputs_size;

    size += 8; // number of outputs (u64)
    let outputs_size: u64 = tx.outputs.iter().map(transaction_output_estimated_serialized_size).sum();
    size += outputs_size;

    size += 8; // lock time (u64)
    size += SUBNETWORK_ID_SIZE as u64;
    size += 8; // gas (u64)
    size += HASH_SIZE as u64; // payload hash

    size += 8; // length of the payload (u64)
    size += tx.payload.len() as u64;
    size
}

const HASH_SIZE:usize = 32;

// 36 + 8 + <signature script length> + 8
fn transaction_input_estimated_serialized_size(input: &TransactionInput) -> u64 {
    let mut size = 0;
    size += outpoint_estimated_serialized_size();

    size += 8; // length of signature script (u64)
    size += input.signature_script.len() as u64;

    size += 8; // sequence (u64)
    size
}

// 32 + 4 = 36
const fn outpoint_estimated_serialized_size() -> u64 {
    let mut size: u64 = 0;
    size += HASH_SIZE as u64; // Previous tx ID
    size += 4; // Index (u32)
    size
}

// 8 + 2 + 8 + <script_public_key length>
pub fn transaction_output_estimated_serialized_size(output: &TransactionOutput) -> u64 {
    let mut size: u64 = 0;
    size += 8; // value (u64)
    size += 2; // output.ScriptPublicKey.Version (u16)
    size += 8; // length of script public key (u64)
    size += output.script_public_key.script().len() as u64;
    size
}

}

Dust Outputs

A transaction output is considered dust, if:

#![allow(unused)]
fn main() {
(transaction_output.value * 1000 / (3 * transaction_output_serialized_size))
    < self.config.minimum_relay_transaction_fee
}

config.minimum_relay_transaction_fee specifies the minimum transaction fee for a transaction to be accepted to the mempool and relayed. It is specified in sompi per 1kg (or 1000 grams) of transaction mass.

#![allow(unused)]
fn main() {
pub(crate) const DEFAULT_MINIMUM_RELAY_TRANSACTION_FEE: u64 = 1000;
}

The following functions can be used to check whether a TransactionOutput is dust:

  • TransactionOutput.isDust()
  • isTransactionOutputDust(transaction_output: TransactionOutput)
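
A minimal usage sketch, assuming a Transaction tx created as shown in the Creation section and that its outputs are accessible as an array:

    for (let output of tx.outputs) {
        if (output.isDust()) {
            console.log("output is dust; it may be rejected by mempool standardness rules");
        }
    }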

Transaction Mass

Transaction mass limits

Transaction mass is a value calculated from the transaction by applying different weights to transaction metrics. These metrics include the mass per transaction byte, mass per script public key byte, and mass per signing operation. These costs are network-specific; for mainnet, they have the following values:

#![allow(unused)]
fn main() {
pub const MAINNET_PARAMS: Params = Params {
    mass_per_tx_byte: 1,
    mass_per_script_pub_key_byte: 10,
    mass_per_sig_op: 1000,
    ...
}
}

The maximum allowed standard transaction mass is 100_000 grams.

#![allow(unused)]
fn main() {
const MAXIMUM_STANDARD_TRANSACTION_MASS: u64 = 100_000;
}

Calculating transaction mass

SigOps mass

#![allow(unused)]
fn main() {
let total_sigops: u64 = tx.inputs.iter().map(|input| input.sig_op_count as u64).sum();
let total_sigops_mass = total_sigops * self.mass_per_sig_op;

}

Input mass

one input mass = (input-size * mass_per_tx_byte) + (input-sig-op-count * mass_per_sig_op)

input-size = outpoint-size + u64 (storage for length of signature script) + signature script length + u64 (sequence)

so one input mass = ((36 + 8 + 66* + 8) * 1) + (1 * 1000) = 1118

*here the value "66" is not fixed but depends on the signature script length; see the size += input.signature_script.len() as u64; line in the transaction_input_estimated_serialized_size() function above.

Transaction mass

transaction mass = 
    ( tx-size * mass_per_tx_byte ) + 
    ( total-output-script-size * mass_per_script_pub_key_byte ) + 
    ( total-input-sigops * mass_per_sig_op )
#![allow(unused)]
fn main() {
let size = transaction_estimated_serialized_size(tx);
let mass_for_size = size * self.mass_per_tx_byte;
let total_script_public_key_size: u64 = tx
    .outputs
    .iter()
    .map(|output| 2 /* script public key version (u16) */ + output.script_public_key.script().len() as u64)
    .sum();
let total_script_public_key_mass = total_script_public_key_size * self.mass_per_script_pub_key_byte;

let total_sigops: u64 = tx.inputs.iter().map(|input| input.sig_op_count as u64).sum();
let total_sigops_mass = total_sigops * self.mass_per_sig_op;

let transaction_mass = mass_for_size + total_script_public_key_mass + total_sigops_mass;
}
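
As a worked example (with illustrative values): a transaction with one schnorr-signed input (66-byte signature script, sig-op-count of 1) and two outputs with standard 34-byte script public keys yields, using the estimated-size rules above and the mainnet parameters:

tx-size = 2 + 8 + (36 + 8 + 66 + 8) + 8 + 2 * (8 + 2 + 8 + 34) + 8 + 20 + 8 + 32 + 8 = 316

transaction mass = (316 * 1) + (2 * (2 + 34) * 10) + (1 * 1000) = 316 + 720 + 1000 = 2036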

Other resources

Mass calculation code can be found here: https://github.com/aspectron/rusty-kaspa/blob/wasm-bindings/consensus/core/src/mass/mod.rs#L59-L89

TODO - change the URL after PR merge to this: https://github.com/kaspanet/rusty-kaspa/blob/master/consensus/core/src/mass/mod.rs

Fee Calculation

While the transaction fee is a single amount, it is composed of two amounts:

  • Minimum Transaction Fee (determined from the transaction size)
  • Priority fee (determined by the user)

The priority fee is any amount above the minimum transaction fee.

Using SDK

The WASM SDK provides a convenience function for fee calculation:

minimumTransactionFee(tx : Transaction, network: NetworkType)

Any extra fee amount in the transaction will be considered a priority fee.
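
A minimal usage sketch, assuming an existing Transaction tx and that the NetworkType enum is exported by the SDK alongside the function above (the Mainnet variant name is an assumption):

    let fee = minimumTransactionFee(tx, NetworkType.Mainnet);
    // any amount paid above `fee` is treated as a priority fee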

Recomputing fees

The following is true for any UTXO network where transaction fees depend on the transaction byte size.

If you accumulate a number of UTXOs with the goal of reaching an amount A, then A + fees may become larger than the total of the selected UTXOs. As a result, you may need to consume another UTXO to satisfy the fees (and consuming another UTXO will in turn result in higher fees).

A typical approach is to reserve an amount for fees, or to fall back and re-select UTXOs for a higher amount if the selected UTXO total is unable to accommodate the fees, as sketched below.
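
The following sketch illustrates the fallback/re-selection approach using the UtxoSet, createTransaction() and minimumTransactionFee() calls shown elsewhere in this book; the selection total accessor and the loop structure are illustrative assumptions only:

    let amount = 1000n;
    let fee = 0n;
    let selection, tx;
    do {
        // select enough UTXOs to cover the target amount plus the current fee estimate
        selection = await utxoSet.select(amount + fee, UtxoOrdering.AscendingAmount);
        tx = createTransaction(selection, outputs, change_address, 0);
        // recompute the minimum fee for the transaction we just built
        fee = minimumTransactionFee(tx, NetworkType.Mainnet);
    } while (selection.totalAmount < amount + fee); // re-select if the fee is not covered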

Calculating fees using mass

The formula for the fee calculation using mass is as follows:

Calculate minimum fee based on transaction mass

MINIMUM_RELAY_TRANSACTION_FEE is in sompi/kg, so multiply by the mass (which is in grams) and divide by 1000 to get the minimum fee in sompi.

#![allow(unused)]
fn main() {
let mut minimum_fee = (transaction_mass * MINIMUM_RELAY_TRANSACTION_FEE) / 1000;

if minimum_fee == 0 {
    minimum_fee = MINIMUM_RELAY_TRANSACTION_FEE;
}

// Set the minimum fee to the maximum possible value if the calculated
// fee is not in the valid range for monetary amounts.
minimum_fee = minimum_fee.min(MAX_SOMPI);
}
#![allow(unused)]
fn main() {
/// DEFAULT_MINIMUM_RELAY_TRANSACTION_FEE specifies the minimum transaction fee for a transaction to be accepted to
/// the mempool and relayed. It is specified in sompi per 1kg (or 1000 grams) of transaction mass.
pub(crate) const DEFAULT_MINIMUM_RELAY_TRANSACTION_FEE: u64 = 1000;
}

Monitoring UTXOs

UTXOs can be obtained via the getUtxosByAddresses method or by registering and listening for UTXO updates via notifications.

Getting UTXOs by Address

let addresses = [
    new Address("kaspatest:qz7ulu4c25dh7fzec9zjyrmlhnkzrg4wmf89q7gzr3gfrsj3uz6xjceef60sd"),
    new Address("kaspatest:qzn3qjzf2nzyd3zj303nk4sgv0aae42v3ufutk5xsxckfels57dxjnltw0jwz",),
];

let utxos = await rpc.getUtxosByAddresses({ addresses });
utxos.forEach((utxo) => {
    console.log(utxo.address);
    console.log(utxo.outpoint);
    console.log(utxo.utxoEntry);
});

Registering for UTXO notifications - TODO

    let addresses = [
        new Address("kaspatest:qz7ulu4c25dh7fzec9zjyrmlhnkzrg4wmf89q7gzr3gfrsj3uz6xjceef60sd"),
        new Address("kaspatest:qzn3qjzf2nzyd3zj303nk4sgv0aae42v3ufutk5xsxckfels57dxjnltw0jwz",),
    ];

    // register notification handler
    await rpc.notify(async (op, payload) => {
        // TODO test
        if (op == "NotifyUtxosChanged") {
            // TODO - new UTXO entry ...
        }
    });
    // subscribe addresses for notifications
    await rpc.subscribeUtxosChanged(addresses);
    // ...
    // unsubscribe addresses
    await rpc.unsubscribeUtxosChanged(addresses);
    // reset notification handler
    await rpc.notify(null);

Creation

Get available UTXOs for a given set of addresses

    let addresses = [
        new Address("kaspatest:qz7ulu4c25dh7fzec9zjyrmlhnkzrg4wmf89q7gzr3gfrsj3uz6xjceef60sd")
    ];
    let utxos_by_address = await rpc.getUtxosByAddresses({ addresses });

Create a UTXO collection from the received UTXO set and select UTXOs needed for a transaction

    let utxoSet = UtxoSet.from(utxos_by_address);
    let amount = 1000n;
    let utxo_selection = await utxoSet.select(amount, UtxoOrdering.AscendingAmount);

UtxoSet is a custom collection designed to efficiently handle sorted collections of UTXOs.

Specify destination amounts and create a transaction

    let change_address = new Address("kaspatest:qz7ulu4c25dh7fzec9zjyrmlhnkzrg4wmf89q7gzr3gfrsj3uz6xjceef60sd");
    let output = new Output(
        new Address("kaspatest:qz7ulu4c25dh7fzec9zjyrmlhnkzrg4wmf89q7gzr3gfrsj3uz6xjceef60sd"),
        amount
    );

    let outputs = new Outputs([output]);
    let priorityFee = 1500;
    let tx = createTransaction(utxo_selection, outputs, change_address, priorityFee);

Time Locks

  • Locktime (absolute locktimes, see the sketch below):
    • 0 - no locktime
    • < 500 billion - block DAA score
    • >= 500 billion - Unix timestamp (milliseconds)
  • Sequence (relative locktimes):
    • bit 63:
      • 0 - sequence lock enabled
      • 1 - sequence lock disabled
    • bits 0-31: the actual relative locktime, an unsigned int32
  • Relative time-based timelocks are disabled (the type flag is ignored).
  • Special functions are added to support timelocks when building a script: AddLockTimeNumber, AddSequenceNumber.


Docs: https://github.com/kaspanet/docs/blob/main/Reference/Timelocks.md
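
As a small sketch of the absolute locktime rules above (the threshold constant and function below are illustrative only):

    const LOCKTIME_THRESHOLD = 500_000_000_000n; // 500 billion

    function describeLockTime(lockTime) {
        if (lockTime === 0n) return "no locktime";
        if (lockTime < LOCKTIME_THRESHOLD) return `locked until DAA score ${lockTime}`;
        return `locked until UNIX timestamp ${lockTime} (milliseconds)`;
    }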

BIP-143-like SigHashes for Kaspa

This document outlines a change to the Kaspa protocol similar to Bitcoin's BIP-143 proposal.
The protocol outlined here is very similar to Bitcoin's proposal, with slight variations due to the different transaction structure in Kaspa and the use of the Blake2b hash (as opposed to SHA256 in Bitcoin).

For motivation and further details see the original BIP-143 proposal: https://github.com/bitcoin/bips/blob/master/bip-0143.mediawiki

Specification

SigHashTypes

The SigHashTypes are defined as follows:

	SigHashAll          SigHashType = 0b00000001
	SigHashNone         SigHashType = 0b00000010
	SigHashSingle       SigHashType = 0b00000100
	SigHashAnyOneCanPay SigHashType = 0b10000000

Note this is different from Bitcoin, where SigHashSingle has the value 0b00000011. This was changed to make SigHashType a true bit-field. In addition, SigHashType is always serialized as a single byte, whilst in Bitcoin it is often serialized as a 4-byte uint and only as 1 byte in the signature itself.
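
Because SigHashType is a bit-field, types are combined with a bitwise OR, for example (using the values defined above):

	SigHashAll | SigHashAnyOneCanPay    = 0b10000001
	SigHashSingle | SigHashAnyOneCanPay = 0b10000100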

SigHash calculation

A new transaction digest algorithm is defined:

Blake2b of the serialization of:
    1. tx.Version (2-bytes unsigned little endian)
    2. previousOutputsHash (32-byte hash)
    3. sequencesHash (32-byte hash)
    4. sigOpCountsHash (32-byte hash)
    5. txIn.PreviousOutpoint.TransactionID (32-byte hash)
    6. txIn.PreviousOutpoint.Index (4-bytes unsigned little endian)
    7. txIn.PreviousOutput.ScriptPubKeyVersion (2-bytes unsigned little endian)
    8. txIn.PreviousOutput.ScriptPubKey.length (8-bytes unsigned little endian)
    9. txIn.PreviousOutput.ScriptPubKey (serialized as script)
    10. txIn.PreviousOutput.Value (8-bytes unsigned little endian)
    11. txIn.Sequence (8-bytes unsigned little endian)
    12. txIn.SigOpCount (1-byte unsigned little endian)
    13. outputsHash (32-byte hash)
    14. tx.Locktime (8-bytes unsigned little endian)
    15. tx.SubnetworkID (20-byte hash)
    16. tx.Gas (8-bytes unsigned little endian)
    17. payloadHash (32-byte hash)
    18. SigHash type of the signature (1-byte unsigned little endian) 
        (Note: SigHash type is different from bitcoin where it's 4-bytes)
    
Where:
    tx - the transaction signed
    txIn - the transaction input signed

Semantics

The semantics of the original sighash types remain unchanged, except for the following:

  1. The way of serialization is changed;
  2. All sighash types commit to the amount being spent by the signed input
  3. SINGLE does not commit to the input index. When ANYONECANPAY is not set, the semantics are unchanged since previousOutputsHash and txIn.PreviousOutpoint.* together implicitly commit to the input index. When SINGLE is used with ANYONECANPAY, omission of the index commitment allows permutation of the input-output pairs, as long as each pair is located at an equivalent index.

The semantics of most values are straightforward, except the following:

previousOutputsHash

  • If ANYONECANPAY flag is set, then previousOutputsHash is a uint256 of 0x0000......0000
  • Otherwise previousOutputsHash is the Blake2b hash of the serialization of all input outpoints in the following format:
      1. previousOutpoint.TransactionId
      2. previousOutpoint.Index
    

sigOpCountsHash

  • If ANYONECANPAY flag is set, then sigOpCountsHash is a uint256 of 0x0000......0000
  • Otherwise, sigOpCountsHash is the Blake2b hash of the serialization of the SigOpCount of all inputs

sequencesHash

  • If ANYONECANPAY, SINGLE or NONE sighash type is set, then sequencesHash is a uint256 of 0x0000......0000
  • Otherwise, sequencesHash is the Blake2b hash of the serialization of the Sequence of all inputs

outputsHash

  • If the sighashType is SINGLE and the input index is larger than or equal to the number of outputs, or the sighashType is NONE, then outputsHash is a uint256 of 0x0000......0000
  • If the sighashType is SINGLE and the input index is smaller than the number of outputs, then outputsHash is the Blake2b hash of the following format for the output with the same index as the input.
  • Otherwise, outputsHash is the Blake2b hash of the following format for all outputs.
  1. Value
  2. ScriptPublicKey.Version
  3. ScriptPublicKey.Script

payloadHash

  • If the tx is a native transaction (i.e. SubnetworkID = 0x0000......0000), then payloadHash is a uint256 of 0x0000......0000
  • Otherwise payloadHash is the Blake2b hash of the transaction's Payload

Signing

Internal signing

To sign a transaction using the WASM SDK, you can call the sign() method with an array of Signer-compatible class instances or a set of private keys.

    let xkey = new XPrivateKey(
        "kprv5y2qurMHCsXYrNfU3GCihuwG3vMqFji7PZXajMEqyBkNh9UZUJgoHYBLTKu1eM4MvUtomcXPQ3Sw9HZ5ebbM4byoUciHo1zrPJBQfqpLorQ",
        false,
        0n
    );

    let private_key = xkey.receiveKey(0);
    let transaction = signTransaction(tx, [private_key], true);
    transaction = transaction.toRpcTransaction();
    let result = await rpc.submitTransaction({transaction, allowOrphan:false});

External signing

In cases where inputs need to be signed externally, you can create a transaction, obtain its input sighashes, sign these sighashes externally, and then apply the signatures back to the transaction.

    let scriptHashes = tx.getScriptHashes();
    let signatures = scriptHashes.map(hash=>signScriptHash(hash, private_key));
    console.log("signatures", signatures)
    let transaction = tx.setSignatures(signatures);
    let result = await rpc.submitTransaction({transaction, allowOrphan:false});

VirtualTransaction

VirtualTransaction is a data structure designed to hold multiple instances of MutableTransaction. This data structure helps create large transactions with an unlimited number of outputs by creating and managing a number of smaller MutableTransactions. Contained MutableTransactions can be related or unrelated.

TODO

Tracking Transactions

When interacting with Kaspa nodes for the purpose of interfacing with a wallet, you are meant to use UTXO data structures. UTXOs allow you to track per-address balances and create outgoing transactions.

Kaspa nodes currently do not provide an RPC call to look up transactions by their id (txid).

As such, while you do have a txid reference in each UTXO, you do not have a way to get any additional transaction information via a direct RPC lookup.

The kaspanet.io web wallet, for the purposes of record-keeping, creates transaction records containing txids aggregated from UTXOs.

Transaction records can also be looked up externally using the Kaspa block explorer: https://explorer.kaspa.org/txs

Side Effects

Due to the inability to look up a transaction by id, you cannot tell exactly when a transaction occurred. UTXOs do not contain any timestamp data; as such, you can only estimate the transaction timestamp based on the time the wallet observed the transaction.

There are plans for a new API call that will allow approximation of transaction timestamps using the DAA score. This is planned for after the initial release of Rusty Kaspa.

Wallets

TODO

Addresses

Address derivations

The KDX/Kaspanet web wallet uses the m/44'/972/0' derivation path (with 12-word seed phrases).

The core Golang CLI wallet and Kaspium use m/44'/111111'/0' (with 24-word seed phrases).

972 and 111111 are coin types (they differ for historical reasons).

IMPORTANT: KDX wallets are not BIP-32 compatible, so the KDX derivation path is supported but considered deprecated.

UTXO Sets

The UtxoSet class is provided by the WASM framework and offers efficient ordered storage of any UTXO set. This class can be helpful when representing a wallet that may have transactions from different sets of addresses.

Contributing

This mdbook for Kaspa is available at https://github.com/aspectron/kaspa-mdbook/.

If you would like to contribute to the content:

cargo install mdbook
git clone https://github.com/aspectron/kaspa-mdbook
cd kaspa-mdbook
mdbook serve

Once started, you can navigate to http://localhost:3000

In Visual Studio Code, you can search for Simple Browser in the command palette. This allows you to preview the book while it is being edited.

mdbook works like a wiki - it scans for links and creates corresponding markdown files if missing. To create new pages, just create links to destination files.

When you've made changes, please submit a PR!