Adding a Secondary Zone to the Existing Topology
This section shows how to add a secondary zone to the topology created in the "Configuring with Multiple Zones" section. The following example adds a secondary zone in a different geographical location, Frankfurt, Germany. Users can then read data from the secondary zone, either because it is physically closer to the client or because the primary zones in the New York metro area are unavailable due to a disaster. The steps are: create and start two new Storage Nodes with capacity 1, create a secondary zone, deploy the new Storage Nodes in the secondary zone, and redistribute the topology so that a replica of each shard is placed in the secondary zone.
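The bootstrap step below can be sketched compactly: the two new Storage Nodes differ only in their root directory and port numbers. Assuming the port convention these examples follow (snN listens on port 5N00 with an HA range of 5N10,5N20 — a convention of this walkthrough, not a requirement), the two makebootconfig invocations can be generated in a loop. This sketch only prints the commands; it does not run them.

```shell
# Print the makebootconfig commands for the two new Storage Nodes.
# KVHOME is assumed to point at the Oracle NoSQL installation directory.
KVHOME=${KVHOME:-KVHOME}
cmds=""
for i in 7 8; do
  port=$((5000 + i * 100))     # datacenter7 -> 5700, datacenter8 -> 5800
  cmd="java -Xmx64m -Xms64m -jar $KVHOME/lib/kvstore.jar makebootconfig \
 -root Data/virtualroot/datacenter$i/KVROOT -host localhost \
 -port $port -harange $((port + 10)),$((port + 20)) -capacity 1"
  cmds="$cmds$cmd
"
  echo "$cmd"
done
```

The full step-by-step commands follow.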
- Create the initial boot configuration (makebootconfig) for the first new Storage Node that will be deployed in the Frankfurt zone.

java -Xmx64m -Xms64m \
-jar KVHOME/lib/kvstore.jar makebootconfig \
-root Data/virtualroot/datacenter7/KVROOT \
-host localhost \
-port 5700 \
-harange 5710,5720 \
-capacity 1
- Copy the security directory to the new Storage Node.

cp -r Data/virtualroot/datacenter1/KVROOT/security \
Data/virtualroot/datacenter7/KVROOT/
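The security directory must arrive intact on every new Storage Node. A quick sanity check is a recursive diff between the source and the copy. The sketch below uses throwaway stand-in directories so it is self-contained; the real check would compare the two KVROOT security paths used above.

```shell
# Stand-ins for the KVROOT security directories; the real check would be:
#   diff -r Data/virtualroot/datacenter1/KVROOT/security \
#           Data/virtualroot/datacenter7/KVROOT/security
src_root=$(mktemp -d)
dst_root=$(mktemp -d)
mkdir -p "$src_root/security"
echo "placeholder" > "$src_root/security/store.trust"   # not a real keystore

cp -r "$src_root/security" "$dst_root/"

# An empty diff means the copy is byte-for-byte identical.
diff -r "$src_root/security" "$dst_root/security" && echo "copy verified"
```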
- Start the 7th Storage Node Agent.

java -Xmx64m -Xms64m \
-jar KVHOME/lib/kvstore.jar start \
-root Data/virtualroot/datacenter7/KVROOT &
- Create the boot configuration (makebootconfig) for the second new Storage Node that will be deployed in the Frankfurt zone.

java -Xmx64m -Xms64m \
-jar KVHOME/lib/kvstore.jar makebootconfig \
-root Data/virtualroot/datacenter8/KVROOT \
-host localhost \
-port 5800 \
-harange 5810,5820 \
-capacity 1
- Copy the security directory to the new Storage Node.

cp -r Data/virtualroot/datacenter1/KVROOT/security \
Data/virtualroot/datacenter8/KVROOT/
- Start the 8th Storage Node Agent.

java -Xmx64m -Xms64m \
-jar KVHOME/lib/kvstore.jar start \
-root Data/virtualroot/datacenter8/KVROOT &
- Start the Admin CLI.

java -Xmx64m -Xms64m \
-jar KVHOME/lib/kvstore.jar runadmin \
-host localhost -port 5100 \
-security Data/virtualroot/datacenter1/KVROOT/security/client.security
- Create a secondary zone in Frankfurt.

kv-> plan deploy-zone -name Frankfurt -rf 1 -type secondary -wait
Executed plan 14, waiting for completion...
Plan 14 ended successfully
- Deploy Storage Node sn7 in the Frankfurt zone.

kv-> plan deploy-sn -znname Frankfurt -host localhost -port 5700 -wait
Executed plan 15, waiting for completion...
Plan 15 ended successfully
- Deploy an Admin process on Storage Node sn7 in the Frankfurt zone, and add sn7 to the Storage Node pool.

kv-> plan deploy-admin -sn sn7 -wait
Executed plan 16, waiting for completion...
Plan 16 ended successfully
kv-> pool join -name SNs -sn sn7
Added Storage Node(s) [sn7] to pool SNs
- Deploy Storage Node sn8 in the Frankfurt zone, and add sn8 to the Storage Node pool.

kv-> plan deploy-sn -znname Frankfurt -host localhost -port 5800 -wait
Executed plan 17, waiting for completion...
Plan 17 ended successfully
kv-> pool join -name SNs -sn sn8
Added Storage Node(s) [sn8] to pool SNs
- Clone the current topology, redistribute it so that every shard gains a replica in the secondary Frankfurt zone, preview the changes, and deploy the new topology.

kv-> topology clone -current -name topo_secondary
Created topo_secondary
kv-> topology redistribute -name topo_secondary -pool SNs
Redistributed: topo_secondary
kv-> topology preview -name topo_secondary
Topology transformation from current deployed topology to topo_secondary:
Create 2 RN
shard rg1
  1 new RN : rg1-rn4
shard rg2
  1 new RN : rg2-rn4
kv-> plan deploy-topology -name topo_secondary -wait
Executed plan 19, waiting for completion...
Plan 19 ended successfully
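As a sanity check on the preview output: redistributing after adding a zone creates one new replication node per shard for each unit of the new zone's replication factor, so the expected count is rf × numShards. A sketch of the arithmetic:

```shell
# Expected new replication nodes after adding the Frankfurt zone:
# one replica per shard for each unit of the zone's replication factor.
num_shards=2     # rg1 and rg2, as shown in the topology
zone_rf=1        # replication factor given to the secondary zone
new_rns=$((num_shards * zone_rf))
echo "expect $new_rns new RNs"   # matches the "Create 2 RN" line in the preview
```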
- Check service status with the show topology command.

kv-> show topology
store=MetroArea numPartitions=100 sequence=120
zn: id=zn1 name=Manhattan repFactor=1 type=PRIMARY allowArbiters=false masterAffinity=false
zn: id=zn2 name=JerseyCity repFactor=1 type=PRIMARY allowArbiters=false masterAffinity=false
zn: id=zn3 name=Queens repFactor=1 type=PRIMARY allowArbiters=false masterAffinity=false
zn: id=zn4 name=Frankfurt repFactor=1 type=SECONDARY allowArbiters=false masterAffinity=false

sn=[sn1] zn:[id=zn1 name=Manhattan] node01:5100 capacity=1 RUNNING
  [rg1-rn1] RUNNING
     single-op avg latency=0.21372496 ms   multi-op avg latency=0.0 ms
sn=[sn2] zn:[id=zn1 name=Manhattan] node02:5200 capacity=1 RUNNING
  [rg2-rn1] RUNNING
     single-op avg latency=0.30840763 ms   multi-op avg latency=0.0 ms
sn=[sn3] zn:[id=zn2 name=JerseyCity] node03:5300 capacity=1 RUNNING
  [rg1-rn2] RUNNING
     No performance info available
sn=[sn4] zn:[id=zn2 name=JerseyCity] node04:5400 capacity=1 RUNNING
  [rg2-rn2] RUNNING
     No performance info available
sn=[sn5] zn:[id=zn3 name=Queens] node05:5500 capacity=1 RUNNING
  [rg1-rn3] RUNNING
     No performance info available
sn=[sn6] zn:[id=zn3 name=Queens] node06:5600 capacity=1 RUNNING
  [rg2-rn3] RUNNING
     No performance info available
sn=[sn7] zn:[id=zn4 name=Frankfurt] node07:5700 capacity=1 RUNNING
  [rg1-rn4] RUNNING
     No performance info available
sn=[sn8] zn:[id=zn4 name=Frankfurt] node08:5800 capacity=1 RUNNING
  [rg2-rn4] RUNNING
     No performance info available

numShards=2
shard=[rg1] num partitions=50
  [rg1-rn1] sn=sn1
  [rg1-rn2] sn=sn3
  [rg1-rn3] sn=sn5
  [rg1-rn4] sn=sn7
shard=[rg2] num partitions=50
  [rg2-rn1] sn=sn2
  [rg2-rn2] sn=sn4
  [rg2-rn3] sn=sn6
  [rg2-rn4] sn=sn8
- Verify that the secondary zone has a replica for each shard.

kv-> verify config
Verify: starting verification of store MetroArea based upon topology sequence #120
100 partitions and 8 storage nodes
Time: 2022-07-30 18:00:19 UTC   Version: 21.2.16
See node01: Data/virtualroot/datacenter1/KVROOT/MetroArea/log/MetroArea_{0..N}.log for progress messages
Verify: Shard Status: healthy:2 writable-degraded:0 read-only:0 offline:0 total:2
Verify: Admin Status: healthy
Verify: Zone [name=Manhattan id=zn1 type=PRIMARY allowArbiters=false masterAffinity=false]   RN Status: online:2 read-only:0 offline:0
Verify: Zone [name=JerseyCity id=zn2 type=PRIMARY allowArbiters=false masterAffinity=false]   RN Status: online:2 read-only:0 offline:0 maxDelayMillis:1 maxCatchupTimeSecs:0
Verify: Zone [name=Queens id=zn3 type=PRIMARY allowArbiters=false masterAffinity=false]   RN Status: online:2 read-only:0 offline:0 maxDelayMillis:1 maxCatchupTimeSecs:0
Verify: Zone [name=Frankfurt id=zn4 type=SECONDARY allowArbiters=false masterAffinity=false]   RN Status: online:2 read-only:0 offline:0 maxDelayMillis:1 maxCatchupTimeSecs:0
Verify: == checking storage node sn1 ==
Verify: Storage Node [sn1] on node01:5100   Zone: [name=Manhattan id=zn1 type=PRIMARY allowArbiters=false masterAffinity=false]   Status: RUNNING   Ver: 21.2.16 2022-07-24 09:50:01 UTC   Build id: c8998e4a8aa5 Edition: Enterprise
Verify:     Admin [admin1]       Status: RUNNING,MASTER
Verify:     Rep Node [rg1-rn1]   Status: RUNNING,MASTER sequenceNumber:1,261 haPort:5111 available storage size:31 GB
Verify: == checking storage node sn2 ==
Verify: Storage Node [sn2] on node02:5200   Zone: [name=Manhattan id=zn1 type=PRIMARY allowArbiters=false masterAffinity=false]   Status: RUNNING   Ver: 21.2.16 2022-07-24 09:50:01 UTC   Build id: c8998e4a8aa5 Edition: Enterprise
Verify:     Rep Node [rg2-rn1]   Status: RUNNING,MASTER sequenceNumber:1,236 haPort:5210 available storage size:31 GB
Verify: == checking storage node sn3 ==
Verify: Storage Node [sn3] on node03:5300   Zone: [name=JerseyCity id=zn2 type=PRIMARY allowArbiters=false masterAffinity=false]   Status: RUNNING   Ver: 21.2.16 2022-07-24 09:50:01 UTC   Build id: c8998e4a8aa5 Edition: Enterprise
Verify:     Admin [admin2]       Status: RUNNING,REPLICA
Verify:     Rep Node [rg1-rn2]   Status: RUNNING,REPLICA sequenceNumber:1,261 haPort:5311 available storage size:31 GB delayMillis:0 catchupTimeSecs:0
Verify: == checking storage node sn4 ==
Verify: Storage Node [sn4] on node04:5400   Zone: [name=JerseyCity id=zn2 type=PRIMARY allowArbiters=false masterAffinity=false]   Status: RUNNING   Ver: 21.2.16 2022-07-24 09:50:01 UTC   Build id: c8998e4a8aa5 Edition: Enterprise
Verify:     Rep Node [rg2-rn2]   Status: RUNNING,REPLICA sequenceNumber:1,236 haPort:5410 available storage size:31 GB delayMillis:1 catchupTimeSecs:0
Verify: == checking storage node sn5 ==
Verify: Storage Node [sn5] on node05:5500   Zone: [name=Queens id=zn3 type=PRIMARY allowArbiters=false masterAffinity=false]   Status: RUNNING   Ver: 21.2.16 2022-07-24 09:50:01 UTC   Build id: c8998e4a8aa5 Edition: Enterprise
Verify:     Admin [admin3]       Status: RUNNING,REPLICA
Verify:     Rep Node [rg1-rn3]   Status: RUNNING,REPLICA sequenceNumber:1,261 haPort:5511 available storage size:31 GB delayMillis:1 catchupTimeSecs:0
Verify: == checking storage node sn6 ==
Verify: Storage Node [sn6] on node06:5600   Zone: [name=Queens id=zn3 type=PRIMARY allowArbiters=false masterAffinity=false]   Status: RUNNING   Ver: 21.2.16 2022-07-24 09:50:01 UTC   Build id: c8998e4a8aa5 Edition: Enterprise
Verify:     Rep Node [rg2-rn3]   Status: RUNNING,REPLICA sequenceNumber:1,236 haPort:5610 available storage size:31 GB delayMillis:0 catchupTimeSecs:0
Verify: == checking storage node sn7 ==
Verify: Storage Node [sn7] on node07:5700   Zone: [name=Frankfurt id=zn4 type=SECONDARY allowArbiters=false masterAffinity=false]   Status: RUNNING   Ver: 21.2.16 2022-07-24 09:50:01 UTC   Build id: c8998e4a8aa5 Edition: Enterprise
Verify:     Admin [admin4]       Status: RUNNING,REPLICA
Verify:     Rep Node [rg1-rn4]   Status: RUNNING,REPLICA sequenceNumber:1,261 haPort:5710 available storage size:31 GB delayMillis:1 catchupTimeSecs:0
Verify: == checking storage node sn8 ==
Verify: Storage Node [sn8] on node08:5800   Zone: [name=Frankfurt id=zn4 type=SECONDARY allowArbiters=false masterAffinity=false]   Status: RUNNING   Ver: 21.2.16 2022-07-24 09:50:01 UTC   Build id: c8998e4a8aa5 Edition: Enterprise
Verify:     Rep Node [rg2-rn4]   Status: RUNNING,REPLICA sequenceNumber:1,238 haPort:5810 available storage size:31 GB delayMillis:0 catchupTimeSecs:0
Verification complete, no violations.