Scenario 3: Generation-Heavy Benchmarks in Generative AI

The generation-heavy scenario targets use cases where the model's response is much longer than the prompt, for example, a long job description generated from a short bullet list of requirements.

The generation-heavy scenario is run with the following token lengths (a request-shape sketch follows this list):

  • The prompt length is fixed at 100 tokens
  • The response length is fixed at 1,000 tokens
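
To make the request shape concrete, below is a minimal sketch of how a generation-heavy request could be assembled. The dictionary keys and the sample prompt are illustrative placeholders, not a specific SDK's parameters; only the two token lengths come from the scenario definition.

```python
# Illustrative shape of a generation-heavy request: short prompt, long response cap.
# The keys below are generic placeholders, not a real client's parameter names.

PROMPT_BULLETS = [
    "Senior data engineer",
    "5+ years of experience",
    "Python, SQL, and Spark",
    "Cloud data warehouses",
]

prompt = "Write a detailed job description from these bullets:\n" + "\n".join(
    f"- {bullet}" for bullet in PROMPT_BULLETS
)  # the benchmark fixes the prompt at 100 tokens; this sample is shorter

request = {
    "prompt": prompt,
    "max_tokens": 1000,  # response length fixed at 1,000 tokens in this scenario
}
```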
Important

The performance (inference speed, throughput, latency) of a hosting dedicated AI cluster depends on the traffic scenarios going through the model that it hosts. Traffic scenarios depend on the following factors (a load-test sketch follows this list):

  1. The number of concurrent requests.
  2. The number of tokens in the prompt.
  3. The number of tokens in the response.
  4. The variance of (2) and (3) across requests.
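
As a rough illustration of how these four factors define a load test, the sketch below drives a stubbed model call at a fixed concurrency and reports request-level latency and throughput. The stub and its per-token timings are invented for the example; in the benchmarks on this page, prompt and response lengths are fixed, so factor 4 contributes nothing and only concurrency varies.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_model_call(prompt_tokens: int, response_tokens: int) -> None:
    """Stand-in for a hosted-model request: crude prefill + decode timing."""
    time.sleep(0.0001 * prompt_tokens + 0.02 * response_tokens)

def run_scenario(concurrency: int, requests: int = 32,
                 prompt_tokens: int = 100, response_tokens: int = 50) -> dict:
    latencies = []

    def one_request(_):
        t0 = time.perf_counter()
        fake_model_call(prompt_tokens, response_tokens)
        latencies.append(time.perf_counter() - t0)  # request-level latency

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        list(pool.map(one_request, range(requests)))
    wall = time.perf_counter() - start

    return {
        "mean_latency_s": sum(latencies) / len(latencies),
        "requests_per_minute": 60 * requests / wall,  # request-level throughput
    }

print(run_scenario(concurrency=4))
```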

Review the terms used in the hosting dedicated AI cluster benchmarks. For a list of scenarios and their descriptions, see Chat and Text Generation Scenarios. The generation-heavy scenario is performed in the following regions.
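
Because the response length is fixed at 1,000 tokens, the four reported metrics are roughly linked by arithmetic: request-level throughput ≈ token-level throughput × 60 / 1,000, and request-level latency ≈ 1,000 / token-level inference speed, plus prefill and queueing overhead. This relationship is inferred from the scenario definition rather than stated by the benchmark; the quick check below uses the concurrency-1 row of the Frankfurt Meta Llama 3 table.

```python
# Back-of-the-envelope check using the concurrency-1 row of the Frankfurt
# Meta Llama 3 table (values copied from the table below).
response_tokens = 1000      # fixed response length in this scenario
token_throughput = 50.14    # tokens/second across the cluster
inference_speed = 50.18     # tokens/second seen by a single request

rpm_estimate = token_throughput * 60 / response_tokens   # ~3.01 vs reported 2.94
latency_estimate = response_tokens / inference_speed     # ~19.9 s vs reported 20.43 s
print(f"estimated RPM: {rpm_estimate:.2f}, latency: {latency_estimate:.1f} s")
```

The small gaps between the estimates and the reported numbers are consistent with per-request overhead such as prompt prefill and scheduling.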

Germany Central (Frankfurt)

Model: meta.llama-3-70b-instruct (Meta Llama 3) hosted on one Large Generic unit of a dedicated AI cluster

| Concurrency | Token-level Inference Speed (tokens/second) | Token-level Throughput (tokens/second) | Request-level Latency (seconds) | Request-level Throughput (requests per minute, RPM) |
| --- | --- | --- | --- | --- |
| 1 | 50.18 | 50.14 | 20.43 | 2.94 |
| 2 | 49.28 | 97.61 | 20.78 | 5.72 |
| 4 | 48.22 | 186.82 | 21.32 | 10.94 |
| 8 | 47.20 | 365.89 | 21.75 | 21.43 |
| 16 | 44.69 | 650.22 | 22.89 | 38.03 |
| 32 | 37.29 | 989.98 | 27.31 | 58.04 |
| 64 | 29.53 | 1621.76 | 32.68 | 95.08 |
| 128 | 19.17 | 1784.76 | 53.14 | 104.56 |
| 256 | 10.79 | 2271.18 | 94.78 | 133.05 |

Model: cohere.command-r-16k v1.2 (Cohere Command R) hosted on one Small Cohere V2 unit of a dedicated AI cluster

| Concurrency | Token-level Inference Speed (tokens/second) | Token-level Throughput (tokens/second) | Request-level Latency (seconds) | Request-level Throughput (requests per minute, RPM) |
| --- | --- | --- | --- | --- |
| 1 | 47.20 | 50.32 | 3.53 | 16.65 |
| 2 | 45.06 | 98.42 | 3.61 | 32.48 |
| 4 | 43.85 | 165.60 | 3.26 | 63.91 |
| 8 | 40.56 | 292.22 | 3.04 | 133.20 |
| 16 | 38.35 | 416.13 | 3.61 | 171.22 |
| 32 | 28.68 | 557.50 | 4.64 | 219.01 |
| 64 | 15.19 | 613.72 | 9.65 | 171.83 |
| 128 | 10.74 | 664.11 | 11.67 | 233.87 |
| 256 | 5.83 | 721.50 | 22.78 | 253.54 |

US Midwest (Chicago)

Model: meta.llama-3-70b-instruct (Meta Llama 3) hosted on one Large Generic unit of a dedicated AI cluster

| Concurrency | Token-level Inference Speed (tokens/second) | Token-level Throughput (tokens/second) | Request-level Latency (seconds) | Request-level Throughput (requests per minute, RPM) |
| --- | --- | --- | --- | --- |
| 1 | 30.53 | 30.51 | 33.58 | 1.79 |
| 2 | 29.78 | 59.01 | 34.42 | 3.45 |
| 4 | 28.88 | 112.35 | 35.48 | 6.58 |
| 8 | 27.67 | 215.18 | 36.99 | 12.61 |
| 16 | 24.85 | 364.06 | 40.99 | 21.34 |
| 32 | 20.51 | 552.34 | 49.60 | 32.35 |
| 64 | 16.12 | 900.39 | 59.36 | 52.72 |
| 128 | 10.17 | 980.45 | 100.27 | 57.43 |
| 256 | 6.30 | 1334.59 | 162.08 | 78.19 |

Model: cohere.command-r-16k v1.2 (Cohere Command R) hosted on one Small Cohere V2 unit of a dedicated AI cluster

| Concurrency | Token-level Inference Speed (tokens/second) | Token-level Throughput (tokens/second) | Request-level Latency (seconds) | Request-level Throughput (requests per minute, RPM) |
| --- | --- | --- | --- | --- |
| 1 | 47.20 | 50.32 | 3.53 | 16.65 |
| 2 | 45.06 | 98.42 | 3.61 | 32.48 |
| 4 | 43.85 | 165.60 | 3.26 | 63.91 |
| 8 | 40.56 | 292.22 | 3.04 | 133.20 |
| 16 | 38.35 | 416.13 | 3.61 | 171.22 |
| 32 | 28.68 | 557.50 | 4.64 | 219.01 |
| 64 | 15.19 | 613.72 | 9.65 | 171.83 |
| 128 | 10.74 | 664.11 | 11.67 | 233.87 |
| 256 | 5.83 | 721.50 | 22.78 | 253.54 |

Model: cohere.command (Cohere Command 52 B) hosted on one Large Cohere unit of a dedicated AI cluster

| Concurrency | Token-level Inference Speed (tokens/second) | Token-level Throughput (tokens/second) | Request-level Latency (seconds) | Request-level Throughput (requests per minute, RPM) |
| --- | --- | --- | --- | --- |
| 1 | 35.78 | 33.43 | 10.98 | 5.33 |
| 8 | 31.41 | 99.67 | 13.87 | 16.61 |
| 32 | 28.49 | 237.10 | 19.48 | 40.24 |
| 128 | 23.01 | 326.93 | 53.13 | 54.89 |

Model: cohere.command-light (Cohere Command Light 6 B) hosted on one Small Cohere unit of a dedicated AI cluster

| Concurrency | Token-level Inference Speed (tokens/second) | Token-level Throughput (tokens/second) | Request-level Latency (seconds) | Request-level Throughput (requests per minute, RPM) |
| --- | --- | --- | --- | --- |
| 1 | 80.38 | 83.61 | 9.19 | 6.34 |
| 8 | 45.96 | 278.91 | 13.89 | 22.46 |
| 32 | 23.90 | 493.78 | 27.34 | 41.13 |
| 128 | 5.12 | 565.06 | 82.15 | 44.89 |

Model: meta.llama-2-70b-chat (Llama2 70 B) hosted on one Llama2 70 unit of a dedicated AI cluster

| Concurrency | Token-level Inference Speed (tokens/second) | Token-level Throughput (tokens/second) | Request-level Latency (seconds) | Request-level Throughput (requests per minute, RPM) |
| --- | --- | --- | --- | --- |
| 1 | 18.12 | 17.58 | 21.44 | 2.72 |
| 8 | 15.96 | 64.28 | 26.83 | 8.91 |
| 32 | 13.72 | 195.48 | 29.43 | 27.99 |
| 128 | 8.61 | 541.75 | 48.50 | 71.52 |