[JAVA] R3 Corda node high availability (HA) and how to configure it

This article is a reprint of the Medium article by TIS Co., Ltd. (approved). Body URL: https://medium.com/@TIS_BC_Prom/r3-corda%E3%83%8E%E3%83%BC%E3%83%89%E3%81%AE%E9%AB%98%E5%8F%AF%E7%94%A8%E6%80%A7-ha-%E3%81%A8%E3%81%9D%E3%81%AE%E6%A7%8B%E6%88%90%E6%96%B9%E6%B3%95-34cb5dd409d1


Corda Enterprise has officially supported "Hot-cold high availability deployment" for Corda nodes since the release of version 3.2. What settings does this "Hot-cold high availability deployment" require, how does it behave, and can it really guarantee high availability? I verified these points and introduce the results here.

A. Introduction

This article uses the AWS environment to configure HA settings for Corda nodes. Also, all AWS instances are configured in the same VPC.

-PostgreSQL is used for the DB of the Corda node with HA settings. The other Corda nodes use the default H2 DB.

-Each node is built on an EC2 instance. The OS is Ubuntu 18.04 LTS.

-To set up the CorDapp development environment, follow the official Corda documentation.

-The official Corda documentation was used as a reference.

B. Overall image of HA settings in this article

-The Corda nodes are built on four EC2 instances.

-The DB of the HA node (hereinafter referred to as PartyB) uses the AWS RDS service.

-PartyB's Artemis service uses AWS's EFS service (as a shared network drive).

-AWS ELB (Classic Load Balancer) is used as the load balancer required for PartyB.

Corda's "Hot-cold high availability deployment" is basically the idea of "HA clustering". It is introduced in "Shared Disk Configuration" on the R3 website. I will explain the role of the load balancer, EFS, and RDS of AWS used in this verification.

    1. The load balancer monitors the status of the PartyB instances and automatically routes traffic from the other Party nodes to the active PartyB node. Note that Corda allows only one node in the HA cluster to run at a time; even if you try to start the second node, it stays in a waiting state and starts automatically when the first node becomes inactive. The IP address (representative address) of the load balancer is advertised to the rest of the Corda network as PartyB's P2P address. This is done by setting the load balancer address as the node's P2P address in the configuration file (build.gradle), as described in Section F.

    2. EFS is used as a shared network drive for the P2P message broker files, so P2P messages are accessible from both nodes regardless of which is active. This network drive is also used to determine which node is active; in other words, the mechanism that lets only one node in the HA cluster run is realized through the network drive.

    3. RDS is used as a shared DB for the HA nodes, because the same DB must be referenced regardless of which node is active.


Figure 1 Overview of HA settings in this article

C. Load balancer settings

The load balancer is used to redirect incoming traffic (P2P, RPC, and HTTP) to the active Corda node, for example the Hot PartyB node in Figure 1. Below are the steps to build the load balancer.

    1. Visit the Load Balancer console page on AWS


Click the Create Load Balancer button to start creating the load balancer.

    2. Select Classic Load Balancer and proceed with the configuration.


    3. In this article's test environment, all Corda nodes are in the same VPC, so "Create an internal load balancer" must be checked, instead of the "External" load balancer described on the official website.

    4. The load balancer port and instance port values should be the same as the "p2pAddress" and "rpcSettings" values set in PartyB's node.conf file.


    5. Follow the official website (https://docs.corda.r3.com/hot-cold-deployment.html) to complete the [Security Group] and [Health Check] settings.

    6. Add the two EC2 instances (Hot/Cold) of PartyB to your load balancer. Example:


    7. When you complete the entire process, a new load balancer is created.


Make a note of the DNS name of the newly created load balancer. This will be used for later settings (Section F).
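For reference, the same load balancer can also be built from the AWS CLI. The following is a minimal sketch, not the procedure used in this article: the name, subnet, security group, and instance IDs are placeholders, and the listener assumes the P2P port (10008) used in Section F.

# Internal Classic Load Balancer forwarding the P2P port to the PartyB instances
aws elb create-load-balancer \
    --load-balancer-name PartyB-HA-LB \
    --scheme internal \
    --subnets subnet-XXXXXXXX \
    --security-groups sg-XXXXXXXX \
    --listeners "Protocol=TCP,LoadBalancerPort=10008,InstanceProtocol=TCP,InstancePort=10008"

# Health check against the P2P port, then registration of the Hot/Cold instances
aws elb configure-health-check \
    --load-balancer-name PartyB-HA-LB \
    --health-check Target=TCP:10008,Interval=10,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2
aws elb register-instances-with-load-balancer \
    --load-balancer-name PartyB-HA-LB \
    --instances i-XXXXXXXX i-YYYYYYYY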

D. Create EFS (shared network drive for P2P message broker files).

Since the procedure is published on the official Corda website and by AWS (https://docs.aws.amazon.com/ja_jp/efs/latest/ug/getting-started.html), the detailed steps are omitted; instead, the points to note are explained below.

  1. EFS must use the same subnet and security group as the PartyB node.

  2. Once the EFS is created, make a note of the Amazon EC2 mounting procedure (from your local VPC). We will use these steps later when mounting this EFS on two HA nodes (PartyB Hot / Cold).
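For reference, the equivalent EFS resources can also be created from the AWS CLI; a sketch only, where the creation token and the subnet/security group IDs (which must match PartyB's, as noted above) are placeholders:

# Create the file system, then a mount target in PartyB's subnet
aws efs create-file-system --creation-token partyb-artemis
aws efs create-mount-target \
    --file-system-id fs-XXXXXXXX \
    --subnet-id subnet-XXXXXXXX \
    --security-groups sg-XXXXXXXX
# Use the FileSystemId returned by create-file-system for --file-system-id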


E. Creating RDS (PostgreSQL on AWS) and setting templates

About creating a PostgreSQL instance

Create a PostgreSQL instance on AWS (the setup procedure is omitted). Here are some notes.

  1. According to Corda's official website, PostgreSQL 9.6 with JDBC driver 42.1.4 has been tested. In this verification I used PostgreSQL 10 with the same driver, and it worked without any problems. https://docs.corda.r3.com/releases/3.3/node-database.html?highlight=node%20database%20developer#postgresql

  2. Make the subnet and security group settings for your DB instance the same as for the LB and PartyB nodes.

  3. Make a note of the endpoint name of the newly created DB instance. This will be used in later configuration (Section F).

About setting up a PostgreSQL instance

  1. Use a third-party DB client tool such as pgAdmin4 to connect to the DB instance created above as an administrator, and create a database.

  2. Use the official script to create the node user and schema.

(https://docs.corda.r3.com/releases/3.3/node-database.html#postgresql)

CREATE USER "my_user" WITH LOGIN PASSWORD 'my_password';
CREATE SCHEMA "my_schema";
GRANT USAGE, CREATE ON SCHEMA "my_schema" TO "my_user";
GRANT SELECT, INSERT, UPDATE, DELETE, REFERENCES ON ALL tables IN SCHEMA "my_schema" TO "my_user";
ALTER DEFAULT privileges IN SCHEMA "my_schema" GRANT SELECT, INSERT, UPDATE, DELETE, REFERENCES ON tables TO "my_user";
GRANT USAGE, SELECT ON ALL sequences IN SCHEMA "my_schema" TO "my_user";
ALTER DEFAULT privileges IN SCHEMA "my_schema" GRANT USAGE, SELECT ON sequences TO "my_user";
ALTER ROLE "my_user" SET search_path = "my_schema";


After executing the above script, the schema is created in the database.
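The result can also be checked without pgAdmin4, from the psql command line client; a minimal sketch assuming the endpoint and database name created in Section E:

# List the schemas in the PartyB database; "my_schema" should appear
psql "host=ce4-pgsql.************.ap-northeast-1.rds.amazonaws.com port=5432 dbname=PartyB_HA2 user=my_user" -c "\dn"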

F. Node settings

This section describes the settings used for this verification.

Add information about HA settings to the build.gradle file.

To configure a node (here, PartyB) for HA, you only need to modify that node's configuration. The relevant part is shown below, together with notes.

build.gradle

…<Abbreviation>…
node {
        name "O=PartyB,L=Tokyo,C=JP"
        p2pAddress "internal-PartyB-HA-LB-*********.ap-northeast-1.elb.amazonaws.com:10008"
        //The above address is the DNS name of the load balancer created in Section C.
        //The other Corda nodes connect to the PartyB (Hot/Cold) nodes via the load balancer above.
        rpcSettings {
            address("0.0.0.0:10009")
            adminAddress("0.0.0.0:10049")
        }
        //The RPC addresses are bound to 0.0.0.0 because they are used by the node itself at startup and do not need to be reachable from other nodes.
        //Also, as stated on the official website, if you set it to the IP address of the front machine, you may get the error "Cannot assign requested address".
        extraConfig = [
                jarDirs : ["/***/***/"],
                //Set jarDirs to the directory where the postgresql-42.1.4.jar driver is located.
                "dataSourceProperties.dataSource.url": "jdbc:postgresql://ce4-pgsql.************.ap-northeast-1.rds.amazonaws.com:5432/PartyB_HA2",
                //The dataSource.url above consists of the DB endpoint (created in Section E), the PostgreSQL port (default 5432), and the database name (created in Section E).
                "dataSourceProperties.dataSourceClassName": "org.postgresql.ds.PGSimpleDataSource",
                "dataSourceProperties.dataSource.user": "*****",
                "dataSourceProperties.dataSource.password": "*****",
                "database.transactionIsolationLevel": "READ_COMMITTED",
                "database.schema": "my_schema",
                "database.runMigration": "true"
                //Set database.runMigration to "true" so that the required tables can be created at node startup.
        ]
        cordapps = [
                "$project.group:cordapp-contracts-states:$project.version",
                "$project.group:cordapp:$project.version"
        ]
        rpcUsers = [[ user: "user1", "password": "test", "permissions": ["ALL"]]]
    }
…<Abbreviation>…

G. Build and deploy CorDapps

Save the modified build.gradle file above and run "./gradlew deployNodes" in the CorDapp directory to start building the CorDapps.
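As a minimal sketch (assuming the standard Gradle wrapper layout of the CorDapp templates):

# Run from the CorDapp project root; node directories are generated under build/nodes
./gradlew deployNodes
ls build/nodes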

If the CorDapps compilation completes successfully, Gradle finishes with a BUILD SUCCESSFUL message.


The directories for all Corda nodes, including PartyB, are now created under "build/nodes". The PartyB node.conf file looks like this:

dataSourceProperties {
    dataSource {
        password=*****
        url="jdbc:postgresql://ce4-pgsql.************.ap-northeast-1.rds.amazonaws.com:5432/PartyB_HA2"
        user=*****
    }
    dataSourceClassName="org.postgresql.ds.PGSimpleDataSource"
}
database {
    runMigration="true"
    schema="my_schema"
    transactionIsolationLevel="READ_COMMITTED"
}
devMode=true
jarDirs=[
    "/***/***/"
]
myLegalName="O=PartyB,L=Osaka,C=JP"
p2pAddress="internal- PartyB-HA-LB -*********.ap-northeast-1.elb.amazonaws.com:10008"
rpcSettings {
    address="0.0.0.0:10009"
    adminAddress="0.0.0.0:10049"
}
security {
    authService {
        dataSource {
            type=INMEMORY
            users=[
                {
                    password=test
                    permissions=[
                        ALL
                    ]
                    user=user1
                }
            ]
        }
    }
}

In addition, the database has all the tables needed by Corda.


Please note that the other tables required by CorDapps have not yet been created. This will be discussed later in Section I.

Then copy the directory of each Corda node to the corresponding server. In this article, we copy each directory to the EC2 instance for that Corda node. Copy the HA node (PartyB) directory to the two EC2 instances (HA-Hot (AZ1) and HA-Cold (AZ1)) that you added to the load balancer in step 6 of Section C.
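A sketch of the copy step, assuming SSH access as the ubuntu user and hypothetical host names for the two instances:

# Copy the generated PartyB node directory to both HA instances
scp -r build/nodes/PartyB ubuntu@HA-Hot-AZ1:/home/ubuntu/PartyB
scp -r build/nodes/PartyB ubuntu@HA-Cold-AZ1:/home/ubuntu/PartyB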



H. Set up a shared drive for the HA node.

Log in to the two PartyB EC2 instances above and complete the following settings on both.

  1. If the directory named "artemis" does not exist under the PartyB directory, create it.

mkdir artemis

  2. Mount the EFS drive on the "artemis" directory

sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-*******.efs.ap-northeast-1.amazonaws.com:/ artemis

// For the mount command above, see the note in Section D.

  3. Set the owner of the "artemis" directory to be the same as the owner of this Corda node.

e.g. chown ubuntu:ubuntu artemis

If they are not the same, you will see an error similar to the following when launching the PartyB node:

---------- Corda Enterprise Edition 4.0 (8dcaeef) -------------------
Tip: If you don't wish to use the shell it can be disabled with the --no-local-shell flag
Logs can be found in                    : /path to your cordapp directory/build/nodes/PartyB/logs
! ATTENTION: This node is running in development mode!  This is not safe for production deployment.
Database connection url is              : jdbc:postgresql://ce4-pgsql.************.ap-northeast-1.rds.amazonaws.com:5432/PartyB_HA2
Advertised P2P messaging addresses      : internal-PartyB-HA-LB-********.ap-northeast-1.elb.amazonaws.com:10008
RPC connection address                  : 0.0.0.0:10009
RPC admin connection address            : 0.0.0.0:10049
[ERROR] 03:25:07+0000 [main] internal.Node.start - Messaging service could not be started.
Shutting down ...
[ERROR] 03:25:07+0000 [main] internal.NodeStartupLogging.invoke - Exception during node startup [errorCode=b46fui, moreInformationAt=https://errors.corda.net/ENT/4.0/b46fui]
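To avoid this, the mount and the ownership can be verified before starting the node; a sketch (the output varies by environment):

# The artemis directory should appear as an nfs4 mount owned by the user that runs corda.jar
df -hT artemis
ls -ld artemis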

  4. The JDBC driver on the EC2 instance may be in a different directory, so double-check that the "jarDirs" value in PartyB's node.conf is correct, and fix it if necessary (a quick check is sketched below).
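For example (a sketch, assuming the driver file name from Section F):

# Confirm that the directory listed in jarDirs actually contains the JDBC driver
grep -A2 jarDirs node.conf
find / -name "postgresql-42.1.4.jar" 2>/dev/null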

I. Start all Corda nodes

Log in to each of your EC2 instances and go to the Corda node directory you copied there.

Run "java -jar corda.jar" in that directory to start the Corda node.

However, when you start the HA node (PartyB), you may receive an error message similar to the following:

------------------ Corda Enterprise Edition 4.0 (8dcaeef) ---------------------------
Tip: You can issue SQL queries to the database from the Corda shell with the jdbc command
Logs can be found in                    : /path to PartyB folder/logs
! ATTENTION: This node is running in development mode!  This is not safe for production deployment.
Database connection url is              : jdbc:postgresql://ce4-pgsql.***********.ap-northeast-1.rds.amazonaws.com:5432/PartyB_HA2
[ERROR] 04:24:40+0000 [main] internal.NodeStartupLogging.invoke - Could not create the DataSource: No migration defined for schema: poc.schemas.**Schema v1: Could not create the DataSource: No migration defined for schema: poc.schemas.**Schema v1 [errorCode=5k06pt, moreInformationAt=https://errors.corda.net/ENT/4.0/5k06pt]

This is because, as noted in Section G, the schemas and tables required by the CorDapp have not yet been created in the PostgreSQL DB.

To create the required migration scripts, run the following commands in the working directory of Section G (the environment where you build the CorDapp).


# Generate SQL migration scripts for the CorDapp schemas
java -jar tools-database-manager-4.0.jar --base-directory /full-path-to-your-cordapp/build/nodes/PartyB --create-migration-sql-for-cordapp
# Package the generated scripts so the tool and the node can pick them up
cd build/nodes/PartyB
jar cvf cordapps/migration.jar migration
cd ../../..
# Preview, then apply, the migration against the node's database
java -jar tools-database-manager-4.0.jar --base-directory /full-path-to-your-cordapp/build/nodes/PartyB --dry-run
java -jar tools-database-manager-4.0.jar --base-directory /full-path-to-your-cordapp/build/nodes/PartyB --execute-migration

The official Corda website has instructions for these commands. https://docs.corda.r3.com/database-management.html?highlight=node%20migration#adding-database-migration-scripts-retrospectively-to-an-existing-cordapp

The migration script is created as a .jar file located in PartyB's CorDapps directory.

You have now created the schema objects required by the CorDapp in your PostgreSQL DB. For example, the table "qd_states" required by the CorDapp used in this article has been newly created.


The EC2 instance HA-Hot (AZ1) now launches the PartyB node successfully, with console output similar to the following. HA-Cold (AZ1) remains in the startup waiting state.

ubuntu@ip-**.**.**.**$ java -jar corda.jar

*************************************************************************************************************************************
*  All rights reserved.                                                                                                             *
*  This software is proprietary to and embodies the confidential technology of R3 LLC ("R3").                                       *
*  Possession, use, duplication or dissemination of the software is authorized only pursuant to a valid written license from R3.    *
*  IF YOU DO NOT HAVE A VALID WRITTEN LICENSE WITH R3, DO NOT USE THIS SOFTWARE.                                                    *
*************************************************************************************************************************************

   ______               __         _____ _   _ _____ _____ ____  ____  ____  ___ ____  _____
  / ____/     _________/ /___ _   | ____| \ | |_   _| ____|  _ \|  _ \|  _ \|_ _/ ___|| ____|
 / /     __  / ___/ __  / __ `/   |  _| |  \| | | | |  _| | |_) | |_) | |_) || |\___ \|  _|
/ /___  /_/ / /  / /_/ / /_/ /    | |___| |\  | | | | |___|  _ <|  __/|  _ < | | ___) | |___
\____/     /_/   \__,_/\__,_/     |_____|_| \_| |_| |_____|_| \_\_|   |_| \_\___|____/|_____|

--- Corda Enterprise Edition 4.0 (8dcaeef) --------------------------------------------------

Tip: You can issue SQL queries to the database from the Corda shell with the jdbc command

Logs can be found in                    : /full-path-to-PartyB-folder/logs
! ATTENTION: This node is running in development mode!  This is not safe for production deployment.
Database connection url is              : jdbc:postgresql://ce4-pgsql.********.ap-northeast-1.rds.amazonaws.com:5432/PartyB_HA2
Advertised P2P messaging addresses      : internal-PartyB-HA-LB-********.ap-northeast-1.elb.amazonaws.com:10008
RPC connection address                  : 0.0.0.0:10009
RPC admin connection address            : 0.0.0.0:10049
Loaded 3 CorDapp(s)                     : Contract CorDapp: ***
Node for "PartyB" started up and registered in 16.17 sec


Welcome to the Corda interactive shell.
Useful commands include 'help' to see what is available, and 'bye' to shut down the node.

Thu May 23 05:26:59 UTC 2019>>>

J. Test HA functionality

Now all the Corda nodes have started successfully on their EC2 instances, as shown in Figure 1.

For example, if you issue a transaction (txn-a) between PartyA and PartyB, txn-a completes successfully. PartyA's node log records that PartyA actually connected to the load balancer and, through it, to PartyB:

[INFO ] 2019-05-23T05:48:44,588Z [Messaging executor] messaging.P2PMessagingClient.createQueueIfAbsent - Create fresh queue internal.peers.DL33z2w7VP2FYTRrowmhC54mZVZQjtHhYxN2PzWL5hLovr bound on same address {}
[INFO ] 2019-05-23T05:48:44,629Z [Thread-28 (ActiveMQ-client-global-threads)] bridging.BridgeControlListener.processControlMessage - Received bridge control message Create(nodeIdentity=O=PartyA, L=Osaka, C=JP, bridgeInfo=BridgeEntry(queueName=internal.peers.DL33z2w7VP2FYTRrowmhC54mZVZQjtHhYxN2PzWL5hLovr, targets=[internal-PartyB-HA-LB-********.ap-northeast-1.elb.amazonaws.com:10008], legalNames=[O=PartyB, L=Osaka, C=JP], serviceAddress=false)) {
// note: the message above shows PartyA trying to connect to the load balancer: "targets=[internal-PartyB-HA-LB-********.ap-northeast-1.elb.amazonaws.com:10008"
[INFO ] 2019-05-23T05:48:44,629Z [Thread-28 (ActiveMQ-client-global-threads)] bridging.LoopbackBridgeManager.deployBridge - Deploying AMQP bridge for internal.peers.DL33z2w7VP2FYTRrowmhC54mZVZQjtHhYxN2PzWL5hLovr, source O=PartyA, L=Osaka, C=JP {}
[INFO ] 2019-05-23T05:48:44,648Z [Thread-28 (ActiveMQ-client-global-threads)] bridging.AMQPBridgeManager$AMQPBridge.invoke - Create new AMQP bridge {legalNames=O=PartyB, L=Osaka, C=JP, maxMessageSize=10485760, queueName=internal.peers.DL33z2w7VP2FYTRrowmhC54mZVZQjtHhYxN2PzWL5hLovr, source=O=PartyA, L=Osaka, C=JP, targets=internal-PartyB-HA-LB-********.ap-northeast-1.elb.amazonaws.com:10008}
[INFO ] 2019-05-23T05:48:44,649Z [Thread-28 (ActiveMQ-client-global-threads)] netty.AMQPClient.start - connect to: internal-PartyB-HA-LB-********.ap-northeast-1.elb.amazonaws.com:10008 {}
[INFO ] 2019-05-23T05:48:44,784Z [nioEventLoopGroup-2-1] netty.AMQPClient.operationComplete - Connected to internal-PartyB-HA-LB-********.ap-northeast-1.elb.amazonaws.com:10008 {}
[INFO ] 2019-05-23T05:48:44,788Z [nioEventLoopGroup-2-1] netty.AMQPChannelHandler.invoke - New client connection 790466bd from internal-PartyB-HA-LB-********.ap-northeast-1.elb.amazonaws.com/26.132.138.97:10008 to /26.132.139.158:37634 {allowedRemoteLegalNames=O=PartyB, L=Osaka, C=JP, localCert=null, remoteAddress=internal-PartyB-HA-LB-********.ap-northeast-1.elb.amazonaws.com/26.132.138.97:10008, remoteCert=null, serverMode=false}
[INFO ] 2019-05-23T05:48:44,846Z [nioEventLoopGroup-2-1] netty.LoggingTrustManagerWrapper.checkServerTrusted - Check Server Certpath:
  C=JP,L=Osaka,O=PartyB[E341A0565CDBFF8699CBA8CB377382C0513CB0EA] issued by C=JP,L=Osaka,O=PartyB[1D6C7F86F4F99AED0CE281FE996A0C70DC5EFBB1]
  C=JP,L=Osaka,O=PartyB[1D6C7F86F4F99AED0CE281FE996A0C70DC5EFBB1] issued by C=US,L=New York,OU=Corda,O=R3 HoldCo LLC,CN=Corda Doorman CA[EBEE2E30152940AE19981ED86FE37D7F07A2C213]
  C=US,L=New York,OU=Corda,O=R3 HoldCo LLC,CN=Corda Doorman CA[EBEE2E30152940AE19981ED86FE37D7F07A2C213] issued by CN=Corda Node Root CA,O=R3,OU=corda,L=London,C=UK[7CAEA9DFB948012B13890B9AE645851C39170773]
  CN=Corda Node Root CA,O=R3,OU=corda,L=London,C=UK[7CAEA9DFB948012B13890B9AE645851C39170773] issued by CN=Corda Node Root CA,O=R3,OU=corda,L=London,C=UK[null] {}
[INFO ] 2019-05-23T05:48:44,891Z [nioEventLoopGroup-2-1] netty.AMQPChannelHandler.invoke - Handshake completed with subject: O=PartyB, L=Osaka, C=JP, requested server name: e8c5bf64fbba23612e26a3302a65c60a.corda.net. {allowedRemoteLegalNames=O=PartyB, L=Osaka, C=JP, localCert=O=PartyA, L=Osaka, C=JP, remoteAddress=internal-PartyB-HA-LB-********.ap-northeast-1.elb.amazonaws.com/26.132.138.97:10008, remoteCert=O=PartyB, L=Osaka, C=JP, serverMode=false}
[INFO ] 2019-05-23T05:48:45,015Z [nioEventLoopGroup-2-1] bridging.AMQPBridgeManager$AMQPBridge.invoke - Bridge Connected {legalNames=O=PartyB, L=Osaka, C=JP, maxMessageSize=10485760, queueName=internal.peers.DL33z2w7VP2FYTRrowmhC54mZVZQjtHhYxN2PzWL5hLovr, source=O=PartyA, L=Osaka, C=JP, targets=internal-PartyB-HA-LB-********.ap-northeast-1.elb.amazonaws.com:10008}

Logs in PartyA (txn-a)

You can also see that the active node of PartyB is processing the above txn-a with PartyA.

[INFO ] 2019-05-23T05:37:25,681Z [Node thread-1] internal.Node.registerJolokiaReporter - Registering Jolokia JMX reporter: {}
// note: in PartyB's Cold node there is no log information about txn-a

Logs in PartyB Cold (txn-a)

[INFO ] 2019-05-23T05:26:59,167Z [Node thread-1] internal.Node.registerJolokiaReporter - Registering Jolokia JMX reporter: {}
// What follows is the log for txn-a.
[INFO ] 2019-05-23T05:49:18,883Z [Messaging executor] messaging.P2PMessagingClient.createQueueIfAbsent - Create fresh queue internal.peers.DLANwREshETUbbSsJhFT4PHMHLqnSvHBzVrzxTd6faYGeF bound on same address {}
[INFO ] 2019-05-23T05:49:18,978Z [Thread-8 (ActiveMQ-client-global-threads)] bridging.BridgeControlListener.processControlMessage - Received bridge control message Create(nodeIdentity=O=PartyB, L=Osaka, C=JP, bridgeInfo=BridgeEntry(queueName=internal.peers.DLANwREshETUbbSsJhFT4PHMHLqnSvHBzVrzxTd6faYGeF, targets=[26.132.139.158:10005], legalNames=[O=PartyA, L=Osaka, C=JP], serviceAddress=false)) {}
[INFO ] 2019-05-23T05:49:18,979Z [Thread-8 (ActiveMQ-client-global-threads)] bridging.LoopbackBridgeManager.deployBridge - Deploying AMQP bridge for internal.peers.DLANwREshETUbbSsJhFT4PHMHLqnSvHBzVrzxTd6faYGeF, source O=PartyB, L=Osaka, C=JP {}
[INFO ] 2019-05-23T05:49:18,997Z [Thread-8 (ActiveMQ-client-global-threads)] bridging.AMQPBridgeManager$AMQPBridge.invoke - Create new AMQP bridge {legalNames=O=PartyA, L=Osaka, C=JP, maxMessageSize=10485760, queueName=internal.peers.DLANwREshETUbbSsJhFT4PHMHLqnSvHBzVrzxTd6faYGeF, source=O=PartyB, L=Osaka, C=JP, targets=26.132.139.158:10005}
[INFO ] 2019-05-23T05:49:18,998Z [Thread-8 (ActiveMQ-client-global-threads)] netty.AMQPClient.start - connect to: 26.132.139.158:10005 {}
[INFO ] 2019-05-23T05:49:19,124Z [nioEventLoopGroup-2-1] netty.AMQPClient.operationComplete - Connected to 26.132.139.158:10005 {}
[INFO ] 2019-05-23T05:49:19,149Z [nioEventLoopGroup-2-1] netty.AMQPChannelHandler.invoke - New client connection 16364379 from /26.132.139.158:10005 to /26.132.142.60:36896 {allowedRemoteLegalNames=O=PartyA, L=Osaka, C=JP, localCert=null, remoteAddress=/26.132.139.158:10005, remoteCert=null, serverMode=false}
[INFO ] 2019-05-23T05:49:19,258Z [nioEventLoopGroup-2-1] netty.LoggingTrustManagerWrapper.checkServerTrusted - Check Server Certpath:
  C=JP,L=Osaka,O=PartyA[A5B65A4EF961EB9963C217598BDD2B3BAFD81263] issued by C=JP,L=Osaka,O=PartyA[8F7435C49DD49B99DB3D5083E40D8FEE34A62DDB]
  C=JP,L=Osaka,O=PartyA[8F7435C49DD49B99DB3D5083E40D8FEE34A62DDB] issued by C=US,L=New York,OU=Corda,O=R3 HoldCo LLC,CN=Corda Doorman CA[EBEE2E30152940AE19981ED86FE37D7F07A2C213]
  C=US,L=New York,OU=Corda,O=R3 HoldCo LLC,CN=Corda Doorman CA[EBEE2E30152940AE19981ED86FE37D7F07A2C213] issued by CN=Corda Node Root CA,O=R3,OU=corda,L=London,C=UK[7CAEA9DFB948012B13890B9AE645851C39170773]
  CN=Corda Node Root CA,O=R3,OU=corda,L=London,C=UK[7CAEA9DFB948012B13890B9AE645851C39170773] issued by CN=Corda Node Root CA,O=R3,OU=corda,L=London,C=UK[null] {}
[INFO ] 2019-05-23T05:49:19,366Z [nioEventLoopGroup-2-1] netty.AMQPChannelHandler.invoke - Handshake completed with subject: O=PartyA, L=Osaka, C=JP, requested server name: 26a6b0b27deba4660c3598ce3f39c668.corda.net. {allowedRemoteLegalNames=O=PartyA, L=Osaka, C=JP, localCert=O=PartyB, L=Osaka, C=JP, remoteAddress=/26.132.139.158:10005, remoteCert=O=PartyA, L=Osaka, C=JP, serverMode=false}
[INFO ] 2019-05-23T05:49:19,461Z [nioEventLoopGroup-2-1] bridging.AMQPBridgeManager$AMQPBridge.invoke - Bridge Connected {legalNames=O=PartyA, L=Osaka, C=JP, maxMessageSize=10485760, queueName=internal.peers.DLANwREshETUbbSsJhFT4PHMHLqnSvHBzVrzxTd6faYGeF, source=O=PartyB, L=Osaka, C=JP, targets=26.132.139.158:10005}
[INFO ] 2019-05-23T05:49:19,515Z [nioEventLoopGroup-2-1] engine.ConnectionStateMachine.invoke - Connection local open org.apache.qpid.proton.engine.impl.ConnectionImpl@4a5e5c39 {localLegalName=O=PartyB, L=Osaka, C=JP, remoteLegalName=O=PartyA, L=Osaka, C=JP, serverMode=false}
[INFO ] 2019-05-23T05:49:19,740Z [flow-worker] corda.flow.call - Received transaction acknowledgement request from party O=PartyA, L=Osaka, C=JP. {fiber-id=10000001, flow-id=1087877d-3892-4501-9b72-c714664c71b5, invocation_id=96c77912-6cea-4614-aced-8bba36aa5a34, invocation_timestamp=2019-05-23T05:49:18.397Z, origin=O=PartyA, L=Osaka, C=JP, session_id=96c77912-6cea-4614-aced-8bba36aa5a34, session_timestamp=2019-05-23T05:49:18.397Z, thread-id=268, tx_id=E4DA424F4AE1C5ABC22E2104E2481C2C811A54F61002154F3A1EFF80166726E9}
[INFO ] 2019-05-23T05:49:19,925Z [flow-worker] corda.flow.call - Transaction dependencies resolution completed. {fiber-id=10000001, flow-id=1087877d-3892-4501-9b72-c714664c71b5, invocation_id=96c77912-6cea-4614-aced-8bba36aa5a34, invocation_timestamp=2019-05-23T05:49:18.397Z, origin=O=PartyA, L=Osaka, C=JP, session_id=96c77912-6cea-4614-aced-8bba36aa5a34, session_timestamp=2019-05-23T05:49:18.397Z, thread-id=264, tx_id=E4DA424F4AE1C5ABC22E2104E2481C2C811A54F61002154F3A1EFF80166726E9}
[INFO ] 2019-05-23T05:49:23,419Z [flow-worker] corda.flow.call - Received transaction acknowledgement request from party O=PartyA, L=Osaka, C=JP. {fiber-id=10000001, flow-id=1087877d-3892-4501-9b72-c714664c71b5, invocation_id=96c77912-6cea-4614-aced-8bba36aa5a34, invocation_timestamp=2019-05-23T05:49:18.397Z, origin=O=PartyA, L=Osaka, C=JP, session_id=96c77912-6cea-4614-aced-8bba36aa5a34, session_timestamp=2019-05-23T05:49:18.397Z, thread-id=268, tx_id=E4DA424F4AE1C5ABC22E2104E2481C2C811A54F61002154F3A1EFF80166726E9}
// note: the acknowledgement messages above show that it was this PartyB (hot) node that processed txn-a with PartyA

Logs in PartyB Hot (txn-a)

After that, stop PartyB's Hot Node and issue a second transaction (txn-b).

You can see that txn-b also completes successfully.
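For reference, a transaction like txn-b is issued from the node's interactive shell. A sketch only: the flow name and its argument below are hypothetical and depend on your CorDapp.

# From PartyA's Corda shell: start one of your CorDapp's flows against PartyB
flow start IssueFlow counterparty: "O=PartyB,L=Osaka,C=JP"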

Checking PartyA's log, you can see that PartyA detected the disconnection of PartyB immediately after the active PartyB node was stopped, and then reconnected through PartyB's load balancer.

[INFO ] 2019-05-23T05:48:50,765Z [nioEventLoopGroup-2-3] engine.ConnectionStateMachine.invoke - Connection local open org.apache.qpid.proton.engine.impl.ConnectionImpl@7f99bfe6 {localLegalName=O=PartyA, L=Osaka, C=JP, remoteLegalName=O=Regulator, L=Osaka, C=JP, serverMode=false}
[INFO ] 2019-05-23T07:15:10,415Z [nioEventLoopGroup-2-1] netty.AMQPClient.operationComplete - Disconnected from internal-PartyB-HA-LB-*********.ap-northeast-1.elb.amazonaws.com:10008 {}
[INFO ] 2019-05-23T07:15:10,416Z [nioEventLoopGroup-2-1] netty.AMQPChannelHandler.invoke - Closed client connection 790466bd from internal-PartyB-HA-LB-*********.ap-northeast-1.elb.amazonaws.com/26.132.138.97:10008 to /26.132.139.158:37634 {allowedRemoteLegalNames=O=PartyB, L=Osaka, C=JP, localCert=O=PartyA, L=Osaka, C=JP, remoteAddress=internal-PartyB-HA-LB-*********.ap-northeast-1.elb.amazonaws.com/26.132.138.97:10008, remoteCert=O=PartyB, L=Osaka, C=JP, serverMode=false}
[INFO ] 2019-05-23T07:15:10,417Z [nioEventLoopGroup-2-1] bridging.AMQPBridgeManager$AMQPBridge.invoke - Bridge Disconnected {legalNames=O=PartyB, L=Osaka, C=JP, maxMessageSize=10485760, queueName=internal.peers.DL33z2w7VP2FYTRrowmhC54mZVZQjtHhYxN2PzWL5hLovr, source=O=PartyA, L=Osaka, C=JP, targets=internal-PartyB-HA-LB-*********.ap-northeast-1.elb.amazonaws.com:10008}
[INFO ] 2019-05-23T07:15:10,422Z [nioEventLoopGroup-2-1] engine.ConnectionStateMachine.invoke - Connection local close org.apache.qpid.proton.engine.impl.ConnectionImpl@1d3c4dab {localLegalName=O=PartyA, L=Osaka, C=JP, remoteLegalName=O=PartyB, L=Osaka, C=JP, serverMode=false}
[INFO ] 2019-05-23T07:15:10,424Z [nioEventLoopGroup-2-1] engine.ConnectionStateMachine.invoke - Transport Error TransportImpl [_connectionEndpoint=org.apache.qpid.proton.engine.impl.ConnectionImpl@1d3c4dab, org.apache.qpid.proton.engine.impl.TransportImpl@51b2ec6f] {localLegalName=O=PartyA, L=Osaka, C=JP, remoteLegalName=O=PartyB, L=Osaka, C=JP, serverMode=false}
[INFO ] 2019-05-23T07:15:10,424Z [nioEventLoopGroup-2-1] engine.ConnectionStateMachine.invoke - Error: connection aborted {localLegalName=O=PartyA, L=Osaka, C=JP, remoteLegalName=O=PartyB, L=Osaka, C=JP, serverMode=false}
[INFO ] 2019-05-23T07:15:11,416Z [nioEventLoopGroup-2-1] netty.AMQPClient.nextTarget - Retry connect to internal-PartyB-HA-LB-*********.ap-northeast-1.elb.amazonaws.com:10008 {}
[INFO ] 2019-05-23T07:15:11,431Z [nioEventLoopGroup-2-2] netty.AMQPClient.operationComplete - Connected to internal-PartyB-HA-LB-*********.ap-northeast-1.elb.amazonaws.com:10008 {}
[INFO ] 2019-05-23T07:15:11,432Z [nioEventLoopGroup-2-2] netty.AMQPChannelHandler.invoke - New client connection 2fab52c6 from internal-PartyB-HA-LB-*********.ap-northeast-1.elb.amazonaws.com/26.132.138.97:10008 to /26.132.139.158:37842 {allowedRemoteLegalNames=O=PartyB, L=Osaka, C=JP, localCert=null, remoteAddress=internal-PartyB-HA-LB-*********.ap-northeast-1.elb.amazonaws.com/26.132.138.97:10008, remoteCert=null, serverMode=false}
[INFO ] 2019-05-23T07:15:11,450Z [nioEventLoopGroup-2-2] netty.LoggingTrustManagerWrapper.checkServerTrusted - Check Server Certpath:
  C=JP,L=Osaka,O=PartyB[E341A0565CDBFF8699CBA8CB377382C0513CB0EA] issued by C=JP,L=Osaka,O=PartyB[1D6C7F86F4F99AED0CE281FE996A0C70DC5EFBB1]
  C=JP,L=Osaka,O=PartyB[1D6C7F86F4F99AED0CE281FE996A0C70DC5EFBB1] issued by C=US,L=New York,OU=Corda,O=R3 HoldCo LLC,CN=Corda Doorman CA[EBEE2E30152940AE19981ED86FE37D7F07A2C213]
  C=US,L=New York,OU=Corda,O=R3 HoldCo LLC,CN=Corda Doorman CA[EBEE2E30152940AE19981ED86FE37D7F07A2C213] issued by CN=Corda Node Root CA,O=R3,OU=corda,L=London,C=UK[7CAEA9DFB948012B13890B9AE645851C39170773]
  CN=Corda Node Root CA,O=R3,OU=corda,L=London,C=UK[7CAEA9DFB948012B13890B9AE645851C39170773] issued by CN=Corda Node Root CA,O=R3,OU=corda,L=London,C=UK[null] {}
[INFO ] 2019-05-23T07:15:11,479Z [nioEventLoopGroup-2-2] netty.AMQPChannelHandler.invoke - Handshake completed with subject: O=PartyB, L=Osaka, C=JP, requested server name: e8c5bf64fbba23612e26a3302a65c60a.corda.net. {allowedRemoteLegalNames=O=PartyB, L=Osaka, C=JP, localCert=O=PartyA, L=Osaka, C=JP, remoteAddress=internal-PartyB-HA-LB-*********.ap-northeast-1.elb.amazonaws.com/26.132.138.97:10008, remoteCert=O=PartyB, L=Osaka, C=JP, serverMode=false}
[INFO ] 2019-05-23T07:15:11,481Z [nioEventLoopGroup-2-2] bridging.AMQPBridgeManager$AMQPBridge.invoke - Bridge Connected {legalNames=O=PartyB, L=Osaka, C=JP, maxMessageSize=10485760, queueName=internal.peers.DL33z2w7VP2FYTRrowmhC54mZVZQjtHhYxN2PzWL5hLovr, source=O=PartyA, L=Osaka, C=JP, targets=internal-PartyB-HA-LB-*********.ap-northeast-1.elb.amazonaws.com:10008}
[INFO ] 2019-05-23T07:15:11,484Z [nioEventLoopGroup-2-2] engine.ConnectionStateMachine.invoke - Connection local open org.apache.qpid.proton.engine.impl.ConnectionImpl@6455ea8d {localLegalName=O=PartyA, L=Osaka, C=JP, remoteLegalName=O=PartyB, L=Osaka, C=JP, serverMode=false}
// note: the "Disconnected" and "Retry connect" messages above show that PartyA detected the disconnection of PartyB (hot) and retried the connection to the load balancer.

Below is PartyA's txn-b log. It indicates that PartyA successfully received PartyB (Cold)'s signature.

[INFO ] 2019-05-23T07:15:22,234Z [pool-8-thread-3] shell.StartShellCommand.main - Executing command "our txn-b commands here", {}
[INFO ] 2019-05-23T07:15:28,650Z [flow-worker] corda.flow.call - Sending transaction to notary: O=Notary, L=Osaka, C=JP. {actor_id=internalShell, actor_owning_identity=O=PartyA, L=Osaka, C=JP, actor_store_id=NODE_CONFIG, fiber-id=10000002, flow-id=b6aa33d8-307a-4338-b8bc-9ce8e3cf1962, invocation_id=2a2f2483-1a24-4702-8d0f-9f02072f27ab, invocation_timestamp=2019-05-23T07:15:22.263Z, origin=internalShell, session_id=e4fe6f15-d925-42e5-8510-30d00af8a86f, session_timestamp=2019-05-23T04:28:05.971Z, thread-id=903, tx_id=DC1B4FBE9E052302EFDB733086C651C7615EDFFE6D37BD23450E45D9BB530D16}
…
[INFO ] 2019-05-23T07:15:30,639Z [flow-worker] corda.flow.call - Notary responded. {actor_id=internalShell, actor_owning_identity=O=PartyA, L=Osaka, C=JP, actor_store_id=NODE_CONFIG, fiber-id=10000002, flow-id=b6aa33d8-307a-4338-b8bc-9ce8e3cf1962, invocation_id=2a2f2483-1a24-4702-8d0f-9f02072f27ab, invocation_timestamp=2019-05-23T07:15:22.263Z, origin=internalShell, session_id=e4fe6f15-d925-42e5-8510-30d00af8a86f, session_timestamp=2019-05-23T04:28:05.971Z, thread-id=903, tx_id=DC1B4FBE9E052302EFDB733086C651C7615EDFFE6D37BD23450E45D9BB530D16}
[INFO ] 2019-05-23T07:15:30,649Z [flow-worker] corda.flow.notariseAndRecord - Recording transaction locally. {actor_id=internalShell, actor_owning_identity=O=PartyA, L=Osaka, C=JP, actor_store_id=NODE_CONFIG, fiber-id=10000002, flow-id=b6aa33d8-307a-4338-b8bc-9ce8e3cf1962, invocation_id=2a2f2483-1a24-4702-8d0f-9f02072f27ab, invocation_timestamp=2019-05-23T07:15:22.263Z, origin=internalShell, session_id=e4fe6f15-d925-42e5-8510-30d00af8a86f, session_timestamp=2019-05-23T04:28:05.971Z, thread-id=903, tx_id=DC1B4FBE9E052302EFDB733086C651C7615EDFFE6D37BD23450E45D9BB530D16}
[INFO ] 2019-05-23T07:15:30,708Z [flow-worker] corda.flow.notariseAndRecord - Recorded transaction locally successfully. {actor_id=internalShell, actor_owning_identity=O=PartyA, L=Osaka, C=JP, actor_store_id=NODE_CONFIG, fiber-id=10000002, flow-id=b6aa33d8-307a-4338-b8bc-9ce8e3cf1962, invocation_id=2a2f2483-1a24-4702-8d0f-9f02072f27ab, invocation_timestamp=2019-05-23T07:15:22.263Z, origin=internalShell, session_id=e4fe6f15-d925-42e5-8510-30d00af8a86f, session_timestamp=2019-05-23T04:28:05.971Z, thread-id=903, tx_id=DC1B4FBE9E052302EFDB733086C651C7615EDFFE6D37BD23450E45D9BB530D16}
[INFO ] 2019-05-23T07:15:30,847Z [flow-worker] corda.flow.call - Party O=PartyB, L=Osaka, C=JP received the transaction. {actor_id=internalShell, actor_owning_identity=O=PartyA, L=Osaka, C=JP, actor_store_id=NODE_CONFIG, fiber-id=10000002, flow-id=b6aa33d8-307a-4338-b8bc-9ce8e3cf1962, invocation_id=2a2f2483-1a24-4702-8d0f-9f02072f27ab, invocation_timestamp=2019-05-23T07:15:22.263Z, origin=internalShell, session_id=e4fe6f15-d925-42e5-8510-30d00af8a86f, session_timestamp=2019-05-23T04:28:05.971Z, thread-id=912, tx_id=DC1B4FBE9E052302EFDB733086C651C7615EDFFE6D37BD23450E45D9BB530D16}
[INFO ] 2019-05-23T07:15:30,847Z [flow-worker] corda.flow.call - All parties received the transaction successfully. {actor_id=internalShell, actor_owning_identity=O=PartyA, L=Osaka, C=JP, actor_store_id=NODE_CONFIG, fiber-id=10000002, flow-id=b6aa33d8-307a-4338-b8bc-9ce8e3cf1962, invocation_id=2a2f2483-1a24-4702-8d0f-9f02072f27ab, invocation_timestamp=2019-05-23T07:15:22.263Z, origin=internalShell, session_id=e4fe6f15-d925-42e5-8510-30d00af8a86f, session_timestamp=2019-05-23T04:28:05.971Z, thread-id=912, tx_id=DC1B4FBE9E052302EFDB733086C651C7615EDFFE6D37BD23450E45D9BB530D16}
// note: the tx_id and the messages above show that PartyA processed txn-b with PartyB

Below is the txn-b log of PartyB's Cold node. It indicates that PartyB (Cold) received txn-b from PartyA and processed it normally.

[INFO ] 2019-05-23T07:15:39,557Z [flow-worker] corda.flow.call - Transaction dependencies resolution completed. {fiber-id=10000001, flow-id=7ee8d76d-25f1-439a-9f43-45cffc8c10e1, invocation_id=86bf5cb8-4442-4de4-93ef-075e6eba0877, invocation_timestamp=2019-05-23T07:15:38.233Z, origin=O=PartyA, L=Osaka, C=JP, session_id=86bf5cb8-4442-4de4-93ef-075e6eba0877, session_timestamp=2019-05-23T07:15:38.233Z, thread-id=653, tx_id=DC1B4FBE9E052302EFDB733086C651C7615EDFFE6D37BD23450E45D9BB530D16}
[INFO ] 2019-05-23T07:15:46,418Z [flow-worker] corda.flow.call - Received transaction acknowledgement request from party O=PartyA, L=Osaka, C=JP. {fiber-id=10000001, flow-id=7ee8d76d-25f1-439a-9f43-45cffc8c10e1, invocation_id=86bf5cb8-4442-4de4-93ef-075e6eba0877, invocation_timestamp=2019-05-23T07:15:38.233Z, origin=O=PartyA, L=Osaka, C=JP, session_id=86bf5cb8-4442-4de4-93ef-075e6eba0877, session_timestamp=2019-05-23T07:15:38.233Z, thread-id=658, tx_id=DC1B4FBE9E052302EFDB733086C651C7615EDFFE6D37BD23450E45D9BB530D16}
[INFO ] 2019-05-23T07:15:46,459Z [flow-worker] corda.flow.call - Transaction dependencies resolution completed. {fiber-id=10000001, flow-id=7ee8d76d-25f1-439a-9f43-45cffc8c10e1, invocation_id=86bf5cb8-4442-4de4-93ef-075e6eba0877, invocation_timestamp=2019-05-23T07:15:38.233Z, origin=O=PartyA, L=Osaka, C=JP, session_id=86bf5cb8-4442-4de4-93ef-075e6eba0877, session_timestamp=2019-05-23T07:15:38.233Z, thread-id=661, tx_id=DC1B4FBE9E052302EFDB733086C651C7615EDFFE6D37BD23450E45D9BB530D16}
[INFO ] 2019-05-23T07:15:46,496Z [flow-worker] corda.flow.call - Successfully received fully signed tx. Sending it to the vault for processing. {fiber-id=10000001, flow-id=7ee8d76d-25f1-439a-9f43-45cffc8c10e1, invocation_id=86bf5cb8-4442-4de4-93ef-075e6eba0877, invocation_timestamp=2019-05-23T07:15:38.233Z, origin=O=PartyA, L=Osaka, C=JP, session_id=86bf5cb8-4442-4de4-93ef-075e6eba0877, session_timestamp=2019-05-23T07:15:38.233Z, thread-id=661, tx_id=DC1B4FBE9E052302EFDB733086C651C7615EDFFE6D37BD23450E45D9BB530D16}
[INFO ] 2019-05-23T07:15:46,626Z [flow-worker] corda.flow.call - Successfully recorded received transaction locally. {fiber-id=10000001, flow-id=7ee8d76d-25f1-439a-9f43-45cffc8c10e1, invocation_id=86bf5cb8-4442-4de4-93ef-075e6eba0877, invocation_timestamp=2019-05-23T07:15:38.233Z, origin=O=PartyA, L=Osaka, C=JP, session_id=86bf5cb8-4442-4de4-93ef-075e6eba0877, session_timestamp=2019-05-23T07:15:38.233Z, thread-id=661, tx_id=DC1B4FBE9E052302EFDB733086C651C7615EDFFE6D37BD23450E45D9BB530D16}
// note: the tx_id and the messages above show that this PartyB (cold) node processed txn-b with PartyA

When the stopped PartyB (Hot) node is restarted while PartyB (Cold) is active, its Corda process does not start but waits instead. Therefore, if you issue a third transaction (txn-c), txn-c is processed normally by PartyB (Cold).

General comment

Even if a failure occurs on the active node, the standby node starts automatically, and although transactions wait briefly while it starts up, they are then processed without interruption, so high availability at the node level can be guaranteed. However, this is only node high availability. From a system-wide perspective, the node, the load balancer, the shared network drive, and the database are all resources a Corda node needs, and if any one of them fails, it no longer functions as a Corda node. High availability of the entire system cannot be guaranteed unless an HA cluster configuration is built for all of these required resources.

Summary

This article described the steps and precautions required to build an HA Corda node in an AWS environment.

The latest Corda Enterprise version (CE 4.0) offers an alternative approach that configures HA nodes without a load balancer. The next article will explain how to set it up.

Note: TIS Blockchain Promotion Office (Ra)

Thanks to Kiyotaka Yamasaki.


This article is a reprint of the Medium article by TIS Co., Ltd. (approved). Body URL: https://medium.com/@TIS_BC_Prom/r3-corda%E3%83%8E%E3%83%BC%E3%83%89%E3%81%AE%E9%AB%98%E5%8F%AF%E7%94%A8%E6%80%A7-ha-%E3%81%A8%E3%81%9D%E3%81%AE%E6%A7%8B%E6%88%90%E6%96%B9%E6%B3%95-34cb5dd409d1

Inquiries about this article: SBI R3 Japan [email protected]
