This article is a reprint of the Medium article by TIS Co., Ltd. (approved).
Body URL: https://medium.com/@TIS_BC_Prom/r3-corda-3rd-party%E8%A3%BD%E3%83%AD%E3%83%BC%E3%83%89%E3%83%90%E3%83%A9%E3%83%B3%E3%82%B5%E3%82%92%E5%BF%85%E8%A6%81%E3%81%A8%E3%81%97%E3%81%AA%E3%81%84%E9%AB%98%E5%8F%AF%E7%94%A8%E6%80%A7-ha-%E3%81%AE%E6%A7%8B%E6%88%90%E6%96%B9%E6%B3%95-3467f4479f6d
In my last post (https://qiita.com/SBIR3Japan/items/0d6a3956613ec076381c, hereinafter "the previous article"), I explained how to set up a hot-cold high availability deployment using a load balancer. That setup still works in version 4.0. This article describes an alternative approach, introduced in Corda Enterprise 4.0, for configuring HA nodes without a 3rd-party load balancer. The verification environment is the same AWS environment as last time, except that the AWS load balancer service is not used.
Figure 1: Overview of the HA configuration in this article
In the previous article, I explained how to build the AWS EFS and RDS services. Build them in the same way this time (see Sections D and E of the previous article).
Corda's official documentation briefly states that backup addresses for an HA node can be added with the additionalP2PAddresses setting. This chapter describes the detailed steps for that configuration.
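In short, the heart of this approach is the following two node.conf settings (using the PartyB hot and cold node IPs that appear throughout this article):
p2pAddress="26.132.137.54:1433" // hot node
additionalP2PAddresses=["26.132.133.94:1433"] // cold node(s)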
…<omitted>…
node {
    name "O=PartyB,L=Tokyo,C=JP"
    p2pAddress "26.132.137.54:1433"
    // The p2pAddress above is the IP of the hot PartyB node. It no longer
    // needs to be set to the DNS name of a load balancer.
    rpcSettings {
        address("localhost:10009")
        adminAddress("localhost:10049")
    }
    cordapps = [
        "$project.group:cordapp-contracts-states:$project.version",
        "$project.group:cordapp:$project.version"
    ]
    rpcUsers = [[user: "user1", "password": "test", "permissions": ["ALL"]]]
}
…<omitted>…
#----- PartyB/node.conf right after running "./gradlew deployNodes" -----#
devMode=true
myLegalName="O=PartyB,L=Osaka,C=JP"
p2pAddress="26.132.137.54:1433"
rpcSettings {
    address="localhost:10009"
    adminAddress="localhost:10049"
}
security {
    authService {
        dataSource {
            type=INMEMORY
            users=[
                {
                    password=test
                    permissions=[
                        ALL
                    ]
                    user=user1
                }
            ]
        }
    }
}
#-----------------------------------------------#
#----- PartyB/node.conf after adding the HA settings -----#
devMode=true
myLegalName="O=PartyB,L=Osaka,C=JP"
p2pAddress="26.132.137.54:1433" // the IP of the hot PartyB node
additionalP2PAddresses=["26.132.133.94:1433"] // the IP of the cold PartyB node
// The 3rd-party DB (e.g. PostgreSQL) shared by PartyB's hot and cold nodes
dataSourceProperties {
    dataSource {
        password=tisbcpoc
        url="jdbc:postgresql://ce4-pgsql.*****.ap-northeast-1.rds.amazonaws.com:5432/HAPartyB"
        user=ubuntu
    }
    dataSourceClassName="org.postgresql.ds.PGSimpleDataSource"
}
database {
    runMigration="true"
    schema="my_schema"
    transactionIsolationLevel="READ_COMMITTED"
}
jarDirs=[
    // Directory where the JDBC driver (postgresql-42.1.4.jar) is located
    "/home/ubuntu/driver"
]
rpcSettings {
    address="localhost:10009"
    adminAddress="localhost:10049"
}
security {
    authService {
        dataSource {
            type=INMEMORY
            users=[
                {
                    password=test
                    permissions=[
                        ALL
                    ]
                    user=user1
                }
            ]
        }
    }
}
#-----------------------------------------------#
Since multiple P2P addresses can be defined, additionalP2PAddresses is declared as an array ([]).
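For illustration only, a node with two backup addresses would list both in the array (the second address below is hypothetical and not part of this article's setup):
additionalP2PAddresses=["26.132.133.94:1433", "26.132.133.95:1433"]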
Now that a new (cold) node has been added to the network, you need to run the bootstrapper to update the network parameters file. Before bootstrapping, however, you must first create a data migration file for PartyB so that the required schema is created in the PostgreSQL DB.
4. Create the required migration script
The migration script and how to create it were introduced in Section I of the previous article. Follow the same procedure to create it.
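As a minimal sketch, assuming the Database Management Tool jar (tools-database-manager-4.0.jar) from the Corda Enterprise 4.0 Evaluation Pack (verify the jar name and flags against your distribution), the migration script can be generated for review and then applied roughly as follows:
# Generate the migration SQL for review (dry run), then apply it to the shared DB.
# The node directory path is an assumption; point it at the PartyB node directory.
java -jar tools-database-manager-4.0.jar --base-directory build/nodes/PartyB --dry-run
java -jar tools-database-manager-4.0.jar --base-directory build/nodes/PartyB --execute-migration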
5. Build the HA node network (bootstrap)
Execute the following command in the root directory of the project:
java -jar corda-tools-network-bootstrapper-4.0.jar --dir build/nodes/
The corda-tools-network-bootstrapper-4.0.jar file used above is a copy of the one under ~/tools/network-bootstrapper/ in the Corda Enterprise 4.0 Evaluation Pack. If the bootstrap is successful, you will see a console message similar to the following:
Bootstrapping local test network in /mnt/**-poc-CE-additionalP2PAddresses/build/nodes
Generating node directory for PartyB
Generating node directory for Regulator
Generating node directory for Notary
Generating node directory for PartyA
Nodes found in the following sub-directories: [PartyA, PartyB, Notary, Regulator]
Found the following CorDapps: []
Not copying CorDapp JARs as --copy-cordapps is set to FirstRunOnly, and it looks like this network has already been bootstrapped.
Waiting for all nodes to generate their node-info files...
Distributing all node-info files to all nodes
Loading existing network parameters... NetworkParameters {
minimumPlatformVersion=4
notaries=[NotaryInfo(identity=O=Notary, L=London, C=GB, validating=true)]
maxMessageSize=10485760
maxTransactionSize=524288000
whitelistedContractImplementations {
}
eventHorizon=PT720H
packageOwnership {
}
modifiedTime=2019-06-25T10:37:15.047Z
epoch=1
}
Gathering notary identities
Generating contract implementations whitelist
Network parameters unchanged
Bootstrapping complete!
The PartyB cold node information has now been added to the Corda network. After the bootstrap completed, the PartyB node.conf file was also updated; its contents are now as follows.
devMode=true
myLegalName="O=PartyB,L=Osaka,C=JP"
p2pAddress="26.132.137.54:1433"
additionalP2PAddresses=["26.132.133.94:1433"]
dataSourceProperties {
    dataSource {
        password=tisbcpoc
        url="jdbc:postgresql://ce4-pgsql.*****.ap-northeast-1.rds.amazonaws.com:5432/HAPartyB"
        user=ubuntu
    }
    dataSourceClassName="org.postgresql.ds.PGSimpleDataSource"
}
database {
    runMigration="true"
    schema="my_schema"
    transactionIsolationLevel="READ_COMMITTED"
}
jarDirs=[
    "/home/ubuntu/driver"
]
rpcSettings {
    address="localhost:10009"
    adminAddress="localhost:10049"
}
security {
    authService {
        dataSource {
            type=INMEMORY
            users=[
                {
                    password=test
                    permissions=[
                        ALL
                    ]
                    user=user1
                }
            ]
        }
    }
}
I was able to set the IP of PartyB's cold node (additionalP2PAddresses) and the database "HAPartyB" shared by the hot and cold nodes (dataSourceProperties).
6. Place the PartyB configuration on the hot and cold nodes
Copy the PartyB directory created in Step 5 above to both the hot node "26.132.137.54" and the cold node "26.132.133.94" (see the example commands after Step 7 below).
7. Set up a shared drive for the HA node
Refer to Section H of the previous article and set up a shared drive (for Artemis) for the PartyB hot and cold nodes.
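For illustration, the copy in Step 6 could be done like this (the destination path and SSH user are assumptions, not from the original setup):
scp -r build/nodes/PartyB ubuntu@26.132.137.54:/home/ubuntu/ # to the hot node
scp -r build/nodes/PartyB ubuntu@26.132.133.94:/home/ubuntu/ # to the cold node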
Place (copy) each of the node directories other than PartyB created in Step 5 onto its corresponding node. All Corda nodes can now boot successfully on their EC2 instances, as shown in Figure 1. To check the operation of PartyB's HA function, follow the verification steps in Section J of the previous article; you can confirm that it behaves the same way as before. Also, as before, corda.jar must be started on both the hot and cold PartyB nodes. If the hot node is already running when the cold node is started, the cold node remains in a waiting (standby) state.
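As a sketch (the node directory path is an assumption for illustration), each PartyB node is started the same way; start the hot node first, then the cold node, which will simply wait while the hot node is active:
cd /home/ubuntu/PartyB # assumed node directory on each EC2 instance
java -jar corda.jar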
This article described an approach to configuring HA nodes without a load balancer, using the Corda Enterprise version (CE4.0). CE4.0 eliminates the need for the 3rd-party load balancer service and its settings, which were required before, so we believe node high availability can be achieved at lower cost and with less effort. However, as mentioned in the previous article, this covers only the HA configuration of the node itself; in actual operation, an HA configuration is also required for the DB and the shared drive. The next article will cover Notary clusters.
Note: TIS Blockchain Promotion Office (Ra). Thanks to Kiyotaka Yamasaki.
Inquiries about this article: SBI R3 Japan [email protected]