Google Kubernetes Engine (GKE)
This topic modernizes a legacy standalone Java application into a containerized microservice that runs on Google Kubernetes Engine (GKE) and connects to Oracle Autonomous AI Database by using an mTLS wallet.
Prerequisites
This section describes the Oracle AI Database requirements for the Java application to connect to Oracle Autonomous AI Database (Serverless) and access the Product table.
Oracle Autonomous AI Database
- Oracle Autonomous AI Database Wallet for the connection.
- Oracle Database user credentials to create a database session and run SQL commands.
- Connectivity from the application server to Oracle AI Database.
- A Product table in Oracle AI Database.
```sql
-- Create the Product table
CREATE TABLE Product (
    id NUMBER PRIMARY KEY,
    name VARCHAR2(100) NOT NULL,
    price NUMBER(10, 2) NOT NULL
);

-- Insert a quick test record (optional, so your UI isn't empty on first load)
INSERT INTO Product (id, name, price)
VALUES (1, 'Test Migration Item', 99.99);

-- Commit the transaction
COMMIT;
```

Implementation
- Development Machine Setup
- Tools and Libraries: Install the following libraries and tools on the development machine:
- Java Development Kit (JDK): JDK 25 or higher.
- Oracle JDBC Driver: Download the standalone ojdbc17.jar.
- Rancher Desktop: Install and select the dockerd (moby) container engine during setup. This gives you the standard docker CLI command. You can use other applications similar to Rancher Desktop, such as Docker Desktop, Podman Desktop, Colima, or OrbStack.
- Google Cloud CLI (gcloud): To provision cloud resources.
- Kubernetes CLI (kubectl) and GKE Authentication Plugin: To interact with the GKE Cluster.
- The Java Source Code (ProductApiApp.java)
- Create the ProductApiApp.java file and copy the following content into it.

```java
import java.io.*;
import java.net.InetSocketAddress;
import java.sql.*;
import com.sun.net.httpserver.*;

public class ProductApiApp {

    // These environment variables are injected by the Kubernetes deployment.yaml
    private static final String DB_URL = System.getenv("DB_URL");
    private static final String DB_USER = System.getenv("DB_USER");
    private static final String DB_PASS = System.getenv("DB_PASS");

    public static void main(String[] args) throws Exception {
        if (DB_URL == null || DB_USER == null || DB_PASS == null) {
            System.err.println("ERROR: Missing DB_URL, DB_USER, or DB_PASS");
            System.exit(1);
        }

        // Bind to 0.0.0.0 (all interfaces) so the Kubernetes LoadBalancer can route traffic to it
        HttpServer server = HttpServer.create(new InetSocketAddress("0.0.0.0", 8080), 0);
        server.createContext("/api/products", new ProductApiHandler());
        server.setExecutor(null);
        server.start();

        System.out.println("API Microservice running on port 8080...");
        System.out.println("Connecting to database using URL: " + DB_URL);
    }

    static class ProductApiHandler implements HttpHandler {
        @Override
        public void handle(HttpExchange exchange) throws IOException {
            // Enable CORS so the UI microservice and remote callers can fetch data from this API
            exchange.getResponseHeaders().add("Access-Control-Allow-Origin", "*");
            exchange.getResponseHeaders().add("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS");
            exchange.getResponseHeaders().add("Access-Control-Allow-Headers", "Content-Type");

            // Handle preflight requests for CORS
            if ("OPTIONS".equalsIgnoreCase(exchange.getRequestMethod())) {
                exchange.sendResponseHeaders(204, -1);
                return;
            }

            exchange.getResponseHeaders().add("Content-Type", "application/json");
            String method = exchange.getRequestMethod();
            StringBuilder jsonResponse = new StringBuilder();

            try (Connection conn = DriverManager.getConnection(DB_URL, DB_USER, DB_PASS)) {
                if ("GET".equalsIgnoreCase(method)) {
                    // READ: List all products
                    jsonResponse.append("[");
                    try (Statement stmt = conn.createStatement();
                         ResultSet rs = stmt.executeQuery("SELECT id, name, price FROM Product ORDER BY id")) {
                        boolean first = true;
                        while (rs.next()) {
                            if (!first) jsonResponse.append(",");
                            jsonResponse.append("{")
                                .append("\"id\":").append(rs.getInt("id")).append(",")
                                .append("\"name\":\"").append(rs.getString("name")).append("\",")
                                .append("\"price\":").append(rs.getDouble("price"))
                                .append("}");
                            first = false;
                        }
                    }
                    jsonResponse.append("]");
                } else if ("POST".equalsIgnoreCase(method) || "PUT".equalsIgnoreCase(method)
                        || "DELETE".equalsIgnoreCase(method)) {
                    // Read the ENTIRE request payload (handles multi-line pretty JSON from Postman)
                    InputStreamReader isr = new InputStreamReader(exchange.getRequestBody(), "utf-8");
                    StringBuilder payloadBuilder = new StringBuilder();
                    int b;
                    while ((b = isr.read()) != -1) {
                        payloadBuilder.append((char) b);
                    }
                    String payload = payloadBuilder.toString();

                    String idStr = extractJsonValue(payload, "id");
                    String name = extractJsonValue(payload, "name");
                    String priceStr = extractJsonValue(payload, "price");
                    int id = idStr.isEmpty() ? 0 : Integer.parseInt(idStr);
                    double price = priceStr.isEmpty() ? 0.0 : Double.parseDouble(priceStr);

                    if ("POST".equalsIgnoreCase(method)) {
                        // CREATE
                        String sql = "INSERT INTO Product (id, name, price) VALUES (?, ?, ?)";
                        try (PreparedStatement pstmt = conn.prepareStatement(sql)) {
                            pstmt.setInt(1, id);
                            pstmt.setString(2, name);
                            pstmt.setDouble(3, price);
                            pstmt.executeUpdate();
                        }
                        jsonResponse.append("{\"status\": \"Product created successfully\"}");
                    } else if ("PUT".equalsIgnoreCase(method)) {
                        // UPDATE
                        String sql = "UPDATE Product SET name=?, price=? WHERE id=?";
                        try (PreparedStatement pstmt = conn.prepareStatement(sql)) {
                            pstmt.setString(1, name);
                            pstmt.setDouble(2, price);
                            pstmt.setInt(3, id);
                            pstmt.executeUpdate();
                        }
                        jsonResponse.append("{\"status\": \"Product updated successfully\"}");
                    } else if ("DELETE".equalsIgnoreCase(method)) {
                        // DELETE
                        String sql = "DELETE FROM Product WHERE id=?";
                        try (PreparedStatement pstmt = conn.prepareStatement(sql)) {
                            pstmt.setInt(1, id);
                            pstmt.executeUpdate();
                        }
                        jsonResponse.append("{\"status\": \"Product deleted successfully\"}");
                    }
                }

                byte[] responseBytes = jsonResponse.toString().getBytes("UTF-8");
                exchange.sendResponseHeaders(200, responseBytes.length);
                OutputStream os = exchange.getResponseBody();
                os.write(responseBytes);
                os.close();
            } catch (SQLException e) {
                String errorJson = "{\"error\":\"" + e.getMessage().replace("\"", "\\\"") + "\"}";
                byte[] responseBytes = errorJson.getBytes("UTF-8");
                exchange.sendResponseHeaders(500, responseBytes.length);
                OutputStream os = exchange.getResponseBody();
                os.write(responseBytes);
                os.close();
            }
        }

        // Lightweight JSON parser helper for zero-dependency constraint
        // Updated to handle arbitrary spaces and multi-line structures
        private String extractJsonValue(String json, String key) {
            if (json == null) return "";
            String searchKey = "\"" + key + "\"";
            int start = json.indexOf(searchKey);
            if (start == -1) return "";
            start = json.indexOf(":", start) + 1;
            int end = json.indexOf(",", start);
            if (end == -1) end = json.indexOf("}", start);
            if (end == -1) end = json.length();
            return json.substring(start, end).replace("\"", "").trim();
        }
    }
}
```
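The extractJsonValue helper is intentionally naive (it does not handle nested objects or strings containing commas), but it is easy to exercise in isolation. The following standalone sketch copies the same parsing logic so you can experiment with it outside the server; the JsonValueDemo class name is illustrative, not part of the tutorial's code.

```java
// Standalone copy of the handler's lightweight JSON-value extractor, for experimentation.
public class JsonValueDemo {
    static String extractJsonValue(String json, String key) {
        if (json == null) return "";
        String searchKey = "\"" + key + "\"";
        int start = json.indexOf(searchKey);
        if (start == -1) return "";
        start = json.indexOf(":", start) + 1;
        int end = json.indexOf(",", start);
        if (end == -1) end = json.indexOf("}", start);
        if (end == -1) end = json.length();
        return json.substring(start, end).replace("\"", "").trim();
    }

    public static void main(String[] args) {
        // Multi-line "pretty" JSON, like the payloads Postman sends
        String payload = "{\n  \"id\": 2,\n  \"name\": \"Widget\",\n  \"price\": 4.50\n}";
        System.out.println(extractJsonValue(payload, "id"));    // 2
        System.out.println(extractJsonValue(payload, "name"));  // Widget
        System.out.println(extractJsonValue(payload, "price")); // 4.50
    }
}
```

Because the parser simply scans for the key and the next delimiter, it tolerates arbitrary whitespace and line breaks, which is all the API needs for its flat Product payloads.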
- The Containerization (Dockerfile)
- Create a file named Dockerfile in the same directory as your Java code and the ojdbc17.jar file. Compile the code inside the container to avoid installing build dependencies on the local machine.

```dockerfile
# Use Eclipse Temurin base image for Java
FROM eclipse-temurin:25-jdk-jammy
WORKDIR /app
COPY ProductApiApp.java /app/
COPY ojdbc17.jar /app/
# Compile the Java application
RUN javac -cp ojdbc17.jar ProductApiApp.java
EXPOSE 8080
CMD ["java", "-cp", ".:ojdbc17.jar", "ProductApiApp"]
```
- Deployment Environment - Google Kubernetes Engine (GKE)
- Open PowerShell, Command Prompt, or Zsh, and then sign in to Google Cloud:

```shell
gcloud auth login
```

- Provision Google Kubernetes Engine and Container Registry
- Define the variables, and then create the Google Artifact Registry (GAR) repository and the Google Kubernetes Engine (GKE) cluster.
```shell
PROJECT_ID="your-gcp-project-id"   # REPLACE with your actual GCP Project ID
REGION="us-central1"
REPO_NAME="mycompanyrepo123"
CLUSTER_NAME="oracle-gke-cluster"

# 1. Set the active project
gcloud config set project $PROJECT_ID

# 2. Enable Required APIs (Artifact Registry and Kubernetes Engine)
gcloud services enable artifactregistry.googleapis.com container.googleapis.com

# 3. Create Google Artifact Registry (GAR) for Docker images
gcloud artifacts repositories create $REPO_NAME \
  --repository-format=docker \
  --location=$REGION \
  --description="Docker repository for Oracle microservices"

# 4. Create GKE Cluster (Standard, 1 node for testing)
gcloud container clusters create $CLUSTER_NAME \
  --region=$REGION \
  --num-nodes=1

# 5. Get kubectl credentials to connect to your new cluster
gcloud container clusters get-credentials $CLUSTER_NAME --region=$REGION
```
- Build and Push the Container (Using Rancher Desktop)
- Ensure that Rancher Desktop is running, and then run the following commands.
```shell
# 1. Configure local Docker/Rancher CLI to authenticate with Google Artifact Registry
gcloud auth configure-docker $REGION-docker.pkg.dev

# 2. Define your full image path
IMAGE_PATH="$REGION-docker.pkg.dev/$PROJECT_ID/$REPO_NAME/product-api:v1"

# 3. Build the image locally (Enforce AMD64 architecture for cloud compatibility)
docker build --platform linux/amd64 -t $IMAGE_PATH .

# 4. Push the image to Google Cloud
docker push $IMAGE_PATH
```
- Configure Oracle Wallet and Database Secrets
- Autonomous AI Database (Serverless) uses an mTLS wallet. Download the instance wallet zip file from the OCI Console and then extract it to a local folder, for example, ./adb-wallet.

```shell
# 1. Upload the Wallet files into Kubernetes as a Secret
kubectl create secret generic adb-wallet \
  --from-file=./adb-wallet/cwallet.sso \
  --from-file=./adb-wallet/tnsnames.ora \
  --from-file=./adb-wallet/sqlnet.ora

# 2. Upload your Database Credentials as a Secret
kubectl create secret generic db-credentials \
  --from-literal=username="ADMIN" \
  --from-literal=password="<Your_ADB_Password123!>"
```
- Deploy to GKE
- Create a file named deployment.yaml. The DB_URL value uses the Oracle TNS alias found in the tnsnames.ora file and points to the wallet directory (/app/wallet) that Kubernetes mounts.
- Update <your-gcp-project-id> inside the image: attribute below to match your actual project ID.
- Replace my_adb_high with the actual TNS name.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: product-api
  template:
    metadata:
      labels:
        app: product-api
    spec:
      containers:
      - name: api
        # UPDATE THIS image string with your actual project ID
        image: us-central1-docker.pkg.dev/<your-gcp-project-id>/mycompanyrepo123/product-api:v1
        imagePullPolicy: Always # Forces K8s to download the newest image from GAR
        ports:
        - containerPort: 8080
        # THIS IS WHERE THE ENVIRONMENT VARIABLES ARE SET FOR JAVA
        env:
        - name: DB_URL
          # The '?TNS_ADMIN=/app/wallet' parameter tells the JDBC driver where to look for the wallet.
          value: "jdbc:oracle:thin:@my_adb_high?TNS_ADMIN=/app/wallet"
        - name: DB_USER
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: username
        - name: DB_PASS
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: password
        volumeMounts:
        - name: wallet-volume
          mountPath: /app/wallet
          readOnly: true
      volumes:
      - name: wallet-volume
        secret:
          secretName: adb-wallet
---
apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: product-api
```
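The DB_URL has two variable parts: the TNS alias (which must match an entry in tnsnames.ora) and the TNS_ADMIN directory (which must match the Secret's mount path). The Oracle JDBC driver interprets the ?TNS_ADMIN= suffix itself; the small sketch below is purely illustrative, splitting the URL apart so the two pieces are easy to see. The DbUrlDemo class is an example, not part of the deployment.

```java
// Illustrative only: shows which parts of the JDBC URL name the TNS alias and the wallet dir.
public class DbUrlDemo {
    // Split "jdbc:oracle:thin:@<alias>?TNS_ADMIN=<dir>" into its two variable parts.
    static String[] aliasAndWallet(String dbUrl) {
        String afterAt = dbUrl.substring(dbUrl.indexOf('@') + 1);
        int q = afterAt.indexOf('?');
        String alias = (q == -1) ? afterAt : afterAt.substring(0, q);
        String wallet = (q == -1) ? "" : afterAt.substring(q + 1).replace("TNS_ADMIN=", "");
        return new String[] { alias, wallet };
    }

    public static void main(String[] args) {
        String[] parts = aliasAndWallet("jdbc:oracle:thin:@my_adb_high?TNS_ADMIN=/app/wallet");
        System.out.println("TNS alias:  " + parts[0]); // my_adb_high
        System.out.println("Wallet dir: " + parts[1]); // /app/wallet
    }
}
```

If the pod fails with a TNS resolution error, these are the two values to double-check: the alias against tnsnames.ora in the wallet, and the directory against the volumeMounts path.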
- Deploy the Application
- Once the EXTERNAL-IP appears, your API is fully accessible over the internet.

```shell
kubectl apply -f deployment.yaml

# Monitor the deployment until an EXTERNAL-IP is assigned
kubectl get services --watch
```
- Interacting with the API
Now that the API is separated from the UI and deployed on GKE, you can interact with it using any REST client or a decoupled frontend.
- Accessing Your Running Application
Once the EXTERNAL-IP appears, for example, 20.124.x.x, the API is accessible over the internet.
Even though your Java application listens on port 8080 (EXPOSE 8080 in the Dockerfile), the Kubernetes Service (api-service defined above) maps the standard web port 80 to the container's port 8080. Therefore, you do not need to specify a port in your URL.
Access URL Format: http://<EXTERNAL-IP>/api/products
- Test it from your terminal:

```shell
curl http://<EXTERNAL-IP>/api/products
```
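If you prefer Java to curl, the same smoke test can be written with the JDK's built-in java.net.http client. This is a hedged sketch: the ApiSmokeTest class name is an example, and 203.0.113.10 is a documentation placeholder you must replace with the EXTERNAL-IP reported by kubectl get services.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ApiSmokeTest {
    // Build the GET request for the product list endpoint (port 80, so no port in the URL).
    static HttpRequest listProductsRequest(String externalIp) {
        return HttpRequest.newBuilder()
                .uri(URI.create("http://" + externalIp + "/api/products"))
                .GET()
                .build();
    }

    public static void main(String[] args) throws Exception {
        // 203.0.113.10 is a placeholder; substitute your cluster's EXTERNAL-IP.
        HttpRequest request = listProductsRequest("203.0.113.10");
        System.out.println(request.method() + " " + request.uri());

        // Uncomment to actually call the API once the LoadBalancer is up:
        // HttpResponse<String> response = HttpClient.newHttpClient()
        //         .send(request, HttpResponse.BodyHandlers.ofString());
        // System.out.println(response.body());
    }
}
```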
- Enable OpenAPI Specifications (Optional)
You can use an OpenAPI specification file (openapi.yaml) in Postman or another REST client to interact with the API through a graphical interface.
- Save the following content as openapi.yaml. Import the file into Postman, and replace <your-gke-api-external-ip> with the IP address from the previous step. The AI uses this schema to automatically generate valid JSON payloads and fetch current Oracle data.

```yaml
openapi: 3.0.0
info:
  title: Oracle ADB-S Product API
  version: 1.0.0
  description: Full CRUD API to perform operations on the Product table.
servers:
  - url: http://<your-gke-api-external-ip>
paths:
  /api/products:
    get:
      summary: Read all products
      operationId: getProducts
      responses:
        '200':
          description: A JSON array of products
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: '#/components/schemas/Product'
    post:
      summary: Create a new product
      operationId: createProduct
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/Product'
      responses:
        '200':
          description: Product created successfully
    put:
      summary: Update an existing product
      operationId: updateProduct
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/Product'
      responses:
        '200':
          description: Product updated successfully
    delete:
      summary: Delete a product
      operationId: deleteProduct
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              properties:
                id:
                  type: integer
      responses:
        '200':
          description: Product deleted successfully
components:
  schemas:
    Product:
      type: object
      properties:
        id:
          type: integer
        name:
          type: string
        price:
          type: number
```
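A payload that satisfies the Product schema is a flat JSON object such as {"id":1,"name":"Test Migration Item","price":99.99}. If you want to generate such bodies programmatically instead of typing them into Postman, a minimal Java sketch follows; the ProductPayloadDemo class and its nested record are illustrative helpers, not part of the tutorial's code.

```java
// Illustrative sketch: build a JSON payload that satisfies the Product schema.
public class ProductPayloadDemo {
    // Mirrors the Product table's columns: id NUMBER, name VARCHAR2(100), price NUMBER(10,2)
    record Product(int id, String name, double price) {
        // Flat JSON object in the shape the API's POST/PUT handlers expect
        String toJson() {
            return "{\"id\":" + id + ",\"name\":\"" + name + "\",\"price\":" + price + "}";
        }
    }

    public static void main(String[] args) {
        Product p = new Product(1, "Test Migration Item", 99.99);
        System.out.println(p.toJson());
        // → {"id":1,"name":"Test Migration Item","price":99.99}
    }
}
```

Note that this sketch, like the server's own serializer, does not escape quotes inside product names; keep test data simple.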
- Cleanup
After you finish testing, delete the cloud resources to avoid Google Cloud compute costs.
- Run the following commands to clean up the resources.
```shell
# 1. Delete the GKE Cluster
gcloud container clusters delete $CLUSTER_NAME --region=$REGION --quiet

# 2. Delete the Artifact Registry repository
gcloud artifacts repositories delete $REPO_NAME --location=$REGION --quiet

# Optional: Remove the local kubectl context
kubectl config delete-context gke_${PROJECT_ID}_${REGION}_${CLUSTER_NAME}
```