Adding a Secondary Zone to the Existing Topology
This section shows how to add a secondary zone to the existing topology created in Configuring with Multiple Zones. The following example adds a secondary zone in a different geographical location, Europe, so that users can read data from the secondary zone either because it is physically closer to the client or because the primary zones in the New York metro area are unavailable due to a disaster. The steps are: create and start two new Storage Nodes with capacity 1, create a secondary zone, deploy the new Storage Nodes in the secondary zone, and redistribute the topology so that a replica of each shard is placed in the secondary zone.
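The commands that follow rely on a few environment variables being set on every node. A minimal sketch of that setup, where every path is an assumption to be replaced with the locations your own installation uses:

```shell
# Hypothetical paths -- each value here is an assumption; substitute the
# directories used by your installation.
export KVHOME=/opt/oracle/kv-23.1.21   # Oracle NoSQL installation directory
export KVROOT=/var/kvroot              # per-node root directory
export KVDATA=/var/kvdata              # parent of the storage directories
export KVHOST="$(hostname)"            # name of the node running the command
echo "KVHOME=$KVHOME"
```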
Perform the following steps on each of the two new nodes (node07 and node08):
- Copy the zipped security files from the first node and unzip them.
unzip -o security.zip -d /
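The copy-and-unzip step has to happen on both new nodes. A sketch that just prints the transfer commands for each node instead of executing them (the node names come from this example; the /tmp staging path is an assumption):

```shell
# Print the per-node copy/unzip commands; the /tmp staging path is an
# assumption, and node01 is the node holding the original security.zip.
for node in node07 node08; do
  echo "scp node01:security.zip ${node}:/tmp/security.zip"
  echo "ssh ${node} unzip -o /tmp/security.zip -d /"
done
```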
- Invoke the makebootconfig utility for the first new Storage Node that will be deployed in the Frankfurt zone. The security configuration is enabled while invoking the makebootconfig utility.
java -jar $KVHOME/lib/kvstore.jar makebootconfig \
    -root $KVROOT \
    -port 5000 \
    -host $KVHOST \
    -harange 5010,5020 \
    -store-security enable \
    -capacity 1 \
    -storagedir ${KVDATA}/disk1 \
    -storagedirsize 5500-MB
- Start the Storage Node Agent.
java -jar $KVHOME/lib/kvstore.jar start -root $KVROOT &
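Before starting the agent it is worth sanity-checking the port layout: the registry port must not fall inside the replication (HA) port range. A quick arithmetic check, using the values from this example:

```shell
# Port layout used above: registry port 5000, HA range 5010-5020.
PORT=5000
HA_LO=5010
HA_HI=5020
if [ "$PORT" -ge "$HA_LO" ] && [ "$PORT" -le "$HA_HI" ]; then
  echo "error: registry port $PORT falls inside harange $HA_LO,$HA_HI" >&2
else
  echo "port layout ok"
fi
```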
To create a secondary zone and deploy the new Storage Nodes, do the following steps:
- Start the Admin CLI. Here $KVHOST is node01.
java -Xmx64m -Xms64m \
    -jar $KVHOME/lib/kvstore.jar runadmin \
    -port 5000 -host $KVHOST \
    -security $KVROOT/security/client.security
- Create a secondary zone in Frankfurt.
kv-> plan deploy-zone -name Frankfurt -rf 1 -type secondary -wait
Executed plan 14, waiting for completion...
Plan 14 ended successfully
- Deploy Storage Node sn7 in the Frankfurt zone.
kv-> plan deploy-sn -znname Frankfurt -host node07 -port 5000 -wait
Executed plan 15, waiting for completion...
Plan 15 ended successfully
- Deploy an administration process on Storage Node sn7 in the Frankfurt zone.
kv-> plan deploy-admin -sn sn7 -wait
Executed plan 16, waiting for completion...
Plan 16 ended successfully
- Deploy Storage Node sn8 in the Frankfurt zone.
kv-> plan deploy-sn -znname Frankfurt -host node08 -port 5000 -wait
Executed plan 17, waiting for completion...
Plan 17 ended successfully
- Redistribute and then deploy the new topology to create one replica of every shard in the secondary Frankfurt zone.
kv-> topology clone -current -name topo_secondary
Created topo_secondary
kv-> topology redistribute -name topo_secondary -pool AllStorageNodes
Redistributed: topo_secondary
kv-> topology preview -name topo_secondary
Topology transformation from current deployed topology to topo_secondary:
Create 2 RN
shard rg1
  1 new RN : rg1-rn4
shard rg2
  1 new RN : rg2-rn4
kv-> plan deploy-topology -name topo_secondary -wait
Executed plan 19, waiting for completion...
Plan 19 ended successfully
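The preview output can be predicted before running it: the redistribute adds one replica per shard to the new zone, so the number of new RNs is the shard count times the new zone's replication factor. A back-of-the-envelope check with this example's numbers:

```shell
# Two shards (rg1, rg2), and the new Frankfurt zone has repFactor=1,
# so the preview should report "Create 2 RN".
SHARDS=2
FRANKFURT_RF=1
NEW_RNS=$((SHARDS * FRANKFURT_RF))
echo "expected new RNs: $NEW_RNS"
```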
- Follow the instructions in Create users and configure security with remote access to copy the user security files to the newly created Storage Nodes.
- Check service status with the show topology command.
kv-> show topology
store=MetroArea  numPartitions=100 sequence=120
  zn: id=zn1 name=Manhattan repFactor=1 type=PRIMARY allowArbiters=false masterAffinity=false
  zn: id=zn2 name=JerseyCity repFactor=1 type=PRIMARY allowArbiters=false masterAffinity=false
  zn: id=zn3 name=Queens repFactor=1 type=PRIMARY allowArbiters=false masterAffinity=false
  zn: id=zn4 name=Frankfurt repFactor=1 type=SECONDARY allowArbiters=false masterAffinity=false

  sn=[sn1] zn:[id=zn1 name=Manhattan] node01:5000 capacity=1 RUNNING
    [rg1-rn1] RUNNING
       single-op avg latency=0.21372496 ms   multi-op avg latency=0.0 ms
  sn=[sn2] zn:[id=zn1 name=Manhattan] node02:5000 capacity=1 RUNNING
    [rg2-rn1] RUNNING
       single-op avg latency=0.30840763 ms   multi-op avg latency=0.0 ms
  sn=[sn3] zn:[id=zn2 name=JerseyCity] node03:5000 capacity=1 RUNNING
    [rg1-rn2] RUNNING
       No performance info available
  sn=[sn4] zn:[id=zn2 name=JerseyCity] node04:5000 capacity=1 RUNNING
    [rg2-rn2] RUNNING
       No performance info available
  sn=[sn5] zn:[id=zn3 name=Queens] node05:5000 capacity=1 RUNNING
    [rg1-rn3] RUNNING
       No performance info available
  sn=[sn6] zn:[id=zn3 name=Queens] node06:5000 capacity=1 RUNNING
    [rg2-rn3] RUNNING
       No performance info available
  sn=[sn7] zn:[id=zn4 name=Frankfurt] node07:5000 capacity=1 RUNNING
    [rg1-rn4] RUNNING
       No performance info available
  sn=[sn8] zn:[id=zn4 name=Frankfurt] node08:5000 capacity=1 RUNNING
    [rg2-rn4] RUNNING
       No performance info available

  numShards=2
  shard=[rg1] num partitions=50
    [rg1-rn1] sn=sn1
    [rg1-rn2] sn=sn3
    [rg1-rn3] sn=sn5
    [rg1-rn4] sn=sn7
  shard=[rg2] num partitions=50
    [rg2-rn1] sn=sn2
    [rg2-rn2] sn=sn4
    [rg2-rn3] sn=sn6
    [rg2-rn4] sn=sn8
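The placement can also be checked mechanically by scanning the shard listing at the end of the show topology output. A sketch against a trimmed copy of that output (the temp-file path is arbitrary; in practice you would capture the real CLI output):

```shell
# Trimmed stand-in for the shard section of "show topology".
cat > /tmp/topo.txt <<'EOF'
shard=[rg1] num partitions=50
  [rg1-rn4] sn=sn7
shard=[rg2] num partitions=50
  [rg2-rn4] sn=sn8
EOF
# Each shard should have one replica on a Frankfurt storage node (sn7/sn8).
for sn in sn7 sn8; do
  grep -q "sn=${sn}$" /tmp/topo.txt && echo "${sn} hosts a replica"
done
```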
- Verify that the secondary zone has a replica for each shard.
kv-> verify configuration
Verify: starting verification of store MetroArea based upon topology sequence #120
100 partitions and 8 storage nodes
Time: 2023-05-24 10:52:15 UTC   Version: 23.1.21
See node01: $KVROOT/Disk1/MetroArea/log/MetroArea_{0..N}.log for progress messages
Verify: Shard Status: healthy:2 writable-degraded:0 read-only:0 offline:0 total:2
Verify: Admin Status: healthy
Verify: Zone [name=Manhattan id=zn1 type=PRIMARY allowArbiters=false masterAffinity=false]
        RN Status: online:2 read-only:0 offline:0
Verify: Zone [name=JerseyCity id=zn2 type=PRIMARY allowArbiters=false masterAffinity=false]
        RN Status: online:2 read-only:0 offline:0 maxDelayMillis:1 maxCatchupTimeSecs:0
Verify: Zone [name=Queens id=zn3 type=PRIMARY allowArbiters=false masterAffinity=false]
        RN Status: online:2 read-only:0 offline:0 maxDelayMillis:1 maxCatchupTimeSecs:0
Verify: Zone [name=Frankfurt id=zn4 type=SECONDARY allowArbiters=false masterAffinity=false]
        RN Status: online:2 read-only:0 offline:0 maxDelayMillis:1 maxCatchupTimeSecs:0
Verify: == checking storage node sn1 ==
Verify: Storage Node [sn1] on node01:5000
        Zone: [name=Manhattan id=zn1 type=PRIMARY allowArbiters=false masterAffinity=false]
        Status: RUNNING   Ver: 23.1.21 2023-05-24 10:52:15 UTC  Build id: c8998e4a8aa5 Edition: Enterprise
Verify:         Admin [admin1]          Status: RUNNING,MASTER
Verify:         Rep Node [rg1-rn1]      Status: RUNNING,MASTER sequenceNumber:1,261 haPort:5011 available storage size:31 GB
Verify: == checking storage node sn2 ==
Verify: Storage Node [sn2] on node02:5000
        Zone: [name=Manhattan id=zn1 type=PRIMARY allowArbiters=false masterAffinity=false]
        Status: RUNNING   Ver: 23.1.21 2023-05-24 10:52:15 UTC  Build id: c8998e4a8aa5 Edition: Enterprise
Verify:         Rep Node [rg2-rn1]      Status: RUNNING,MASTER sequenceNumber:1,236 haPort:5012 available storage size:31 GB
Verify: == checking storage node sn3 ==
Verify: Storage Node [sn3] on node03:5000
        Zone: [name=JerseyCity id=zn2 type=PRIMARY allowArbiters=false masterAffinity=false]
        Status: RUNNING   Ver: 23.1.21 2023-05-24 10:52:15 UTC  Build id: c8998e4a8aa5 Edition: Enterprise
Verify:         Admin [admin2]          Status: RUNNING,REPLICA
Verify:         Rep Node [rg1-rn2]      Status: RUNNING,REPLICA sequenceNumber:1,261 haPort:5011 available storage size:31 GB delayMillis:0 catchupTimeSecs:0
Verify: == checking storage node sn4 ==
Verify: Storage Node [sn4] on node04:5000
        Zone: [name=JerseyCity id=zn2 type=PRIMARY allowArbiters=false masterAffinity=false]
        Status: RUNNING   Ver: 23.1.21 2023-05-24 10:52:15 UTC  Build id: c8998e4a8aa5 Edition: Enterprise
Verify:         Rep Node [rg2-rn2]      Status: RUNNING,REPLICA sequenceNumber:1,236 haPort:5012 available storage size:31 GB delayMillis:1 catchupTimeSecs:0
Verify: == checking storage node sn5 ==
Verify: Storage Node [sn5] on node05:5000
        Zone: [name=Queens id=zn3 type=PRIMARY allowArbiters=false masterAffinity=false]
        Status: RUNNING   Ver: 23.1.21 2023-05-24 10:52:15 UTC  Build id: c8998e4a8aa5 Edition: Enterprise
Verify:         Admin [admin3]          Status: RUNNING,REPLICA
Verify:         Rep Node [rg1-rn3]      Status: RUNNING,REPLICA sequenceNumber:1,261 haPort:5011 available storage size:31 GB delayMillis:1 catchupTimeSecs:0
Verify: == checking storage node sn6 ==
Verify: Storage Node [sn6] on node06:5000
        Zone: [name=Queens id=zn3 type=PRIMARY allowArbiters=false masterAffinity=false]
        Status: RUNNING   Ver: 23.1.21 2023-05-24 10:52:15 UTC  Build id: c8998e4a8aa5 Edition: Enterprise
Verify:         Rep Node [rg2-rn3]      Status: RUNNING,REPLICA sequenceNumber:1,236 haPort:5012 available storage size:31 GB delayMillis:0 catchupTimeSecs:0
Verify: == checking storage node sn7 ==
Verify: Storage Node [sn7] on node07:5000
        Zone: [name=Frankfurt id=zn4 type=SECONDARY allowArbiters=false masterAffinity=false]
        Status: RUNNING   Ver: 23.1.21 2023-05-24 10:52:15 UTC  Build id: c8998e4a8aa5 Edition: Enterprise
Verify:         Admin [admin4]          Status: RUNNING,REPLICA
Verify:         Rep Node [rg1-rn4]      Status: RUNNING,REPLICA sequenceNumber:1,261 haPort:5011 available storage size:31 GB delayMillis:1 catchupTimeSecs:0
Verify: == checking storage node sn8 ==
Verify: Storage Node [sn8] on node08:5000
        Zone: [name=Frankfurt id=zn4 type=SECONDARY allowArbiters=false masterAffinity=false]
        Status: RUNNING   Ver: 23.1.21 2023-05-24 10:52:15 UTC  Build id: c8998e4a8aa5 Edition: Enterprise
Verify:         Rep Node [rg2-rn4]      Status: RUNNING,REPLICA sequenceNumber:1,238 haPort:5012 available storage size:31 GB delayMillis:0 catchupTimeSecs:0
Verification complete, no violations.
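For scripted health checks, the tail of the verify configuration output is the useful signal. A sketch that scans saved output for the closing line (the file path and the sample content are stand-ins; in practice you would capture the real CLI output):

```shell
# Stand-in for captured "verify configuration" output.
cat > /tmp/verify.txt <<'EOF'
Verify: == checking storage node sn8 ==
Verification complete, no violations.
EOF
# A healthy store ends its verify output with "no violations".
if grep -q "no violations" /tmp/verify.txt; then
  echo "store healthy"
else
  echo "violations found" >&2
fi
```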