Polling Third-Party Endpoints with a Microservice
A lightweight, task-specific microservice can extend the capabilities of Conscia's low-code platform tremendously, allowing for a streamlined and clear business user experience even when complex programmed functions are executed within the workflow. This recipe demonstrates one such microservice.
While DX Engine's Experience API supports both synchronous and asynchronous calls to web services, some workflows cannot be adequately served by a single call of either kind. While strategies exist to fulfill this need using Conscia alone, enterprises typically implement a number of cloud-hosted serverless functions peripheral to their orchestration platform instance.
This recipe demonstrates a typical microservice interaction: a function called Relay. It requests that a job be enqueued on another system, actively polls a job status endpoint, and then queries the job result once the job has concluded. In this example, we will send a query to a specific OpenAI (ChatGPT) Assistant and await the response generated by that LLM.
Additionally, we take advantage of Conscia's Experience Rules to capture the majority of the required data fields, so that business users can call upon "pre-fab" configuration combinations instead of concerning themselves with managing complex state.
By following this pattern, any necessary tools and capabilities, such as real-time LLM content, can supplement and extend Conscia's capabilities.
Mapping Out The Recipe
When the frontend calls Conscia's Experience API, it will pass one Context field:
- The `relayConnection` value, which is the name of a Connection. This provides the Relay Component with both its actual connection details and the rules by which the outgoing payload is populated.
An example call looks like this:
POST {{engineUrl}}/experience/components/_query
X-Customer-Code: {{customerCode}}
Authorization: Bearer {{dxEngineToken}}
{
"componentCodes": ["relay"],
"context": {
"relayConnection": "relay-openai"
}
}
Based on the context provided, we will reach out to OpenAI with a pre-populated query ("Explain deep learning to a 5 year old."), and the LLM will respond:
{
"duration": 6308,
"components": {
"relay": {
"status": "VALID",
"response": "Imagine your brain is like a big team of tiny helpers who work together to figure things out. Deep learning is like teaching a computer to have its own team of tiny helpers, called neurons, that work together to learn new things. Just like you learn by looking at stuff and practicing, the computer looks at lots of examples, practices a lot, and gets smarter over time."
}
},
"errors": []
}
Microservice Configuration Details
For this recipe, we hosted the following JavaScript application on Google Cloud using Cloud Run. However, this is a "vanilla" Node.js application (using only common packages such as Express and Axios, plus Node's built-in https module) that can be hosted on any cloud or server.
The inputs to this application are detailed in the Relay Rules Component. In summary, the service:

1. Receives a request from DX Engine, merging in the downstream-authorization header (if present) as the outgoing authorization header.
2. POSTs apiBody1 with the supplied headers to apiUrl1 to start a job.
3. Substitutes any responseVariables from the first response into pollUrl and apiUrl2.
4. Calls pollUrl every pollInterval milliseconds until the response matches doneRegex.
5. Either GETs apiUrl2, or POSTs apiBody2 to apiUrl2 if apiBody2 was provided, and returns that second response to Conscia.

Logging and error handling are present throughout.
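For illustration, a call targeting the OpenAI Assistants API might hand this service a payload like the one below. The endpoint paths are OpenAI's Assistants v2 routes; the Assistant ID, poll interval, and regex are placeholder assumptions, and the OpenAI key itself travels in the downstream-authorization request header rather than in the payload:

{
  "apiUrl1": "https://api.openai.com/v1/threads/runs",
  "apiBody1": {
    "assistant_id": "asst_XXXXXXXX",
    "thread": {
      "messages": [
        { "role": "user", "content": "Explain deep learning to a 5 year old." }
      ]
    }
  },
  "headers": {
    "Content-Type": "application/json",
    "OpenAI-Beta": "assistants=v2"
  },
  "responseVariables": ["id", "thread_id"],
  "pollUrl": "https://api.openai.com/v1/threads/{!{thread_id}!}/runs/{!{id}!}",
  "pollInterval": 1000,
  "doneRegex": "\"status\":\"completed\"",
  "apiUrl2": "https://api.openai.com/v1/threads/{!{thread_id}!}/messages"
}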
index.js
const express = require('express');
const axios = require('axios');
const https = require('https');
const logging = process.env.NODE_ENV !== 'production';
const log = (message) => {
if (logging) {
console.log(message);
}
};
// Create a new HTTPS agent
const agent = new https.Agent({
secureProtocol: 'TLSv1_2_method',
rejectUnauthorized: false // Disable SSL certificate verification (convenient for testing; consider re-enabling in production)
});
const app = express();
app.use(express.json());
// Endpoint to receive requests from your program
app.post('/start-job', async (req, res) => {
  let { apiUrl1, apiBody1, headers, responseVariables, pollUrl, pollInterval, doneRegex, apiUrl2, apiBody2 } = req.body;
  headers = headers || {}; // Guard against a payload that omits the headers object
  if (req.headers['downstream-authorization']) {
    headers['authorization'] = req.headers['downstream-authorization'];
  }
log("req.body: " + JSON.stringify(req.body));
log("headers: " + JSON.stringify(headers));
log("req.headers: " + JSON.stringify(req.headers));
try {
// Step 2: Make a request to API URL #1 to start a job
const response1 = await axios.post(apiUrl1, apiBody1, {
headers: { ...headers },
httpsAgent: agent
});
log("response1: " + response1);
// Step 3: Dynamically retrieve variables from the response
const context = {};
responseVariables.forEach(variable => {
context[variable] = response1.data[variable];
pollUrl = pollUrl.replace(`{!{${variable}}!}`, context[variable]);
apiUrl2 = apiUrl2.replace(`{!{${variable}}!}`, context[variable]);
});
log("context: " + context);
log("pollUrl: " + pollUrl);
let pollResponse;
const doneRegexObj = new RegExp(doneRegex);
let isJobDone = false;
    while (!isJobDone) {
      // Wait pollInterval milliseconds between polls. Note: this loop is unbounded,
      // so production deployments may want a maximum attempt count or overall timeout.
      await new Promise(resolve => setTimeout(resolve, pollInterval));
      pollResponse = await axios.get(pollUrl, { headers, httpsAgent: agent });
log("pollResponse: " + JSON.stringify(pollResponse.data) + ", " + JSON.stringify(pollResponse.status) + ", " + JSON.stringify(pollResponse.headers));
// Check the entire pollResponse for the key-value pair in doneRegex.
if (doneRegexObj.test(JSON.stringify(pollResponse.data))) {
isJobDone = true;
log("isJobDone: " + isJobDone);
}
}
// Step 4: Run API URL #2.
let apiResponse2;
if (apiBody2) {
apiResponse2 = await axios.post(apiUrl2, apiBody2, {
headers: { ...headers },
httpsAgent: agent
});
} else {
apiResponse2 = await axios.get(apiUrl2, {
headers: { ...headers },
httpsAgent: agent
});
}
log("apiResponse2: " + JSON.stringify(apiResponse2.data) + ", " + JSON.stringify(apiResponse2.status) + ", " + JSON.stringify(apiResponse2.headers));
res.status(apiResponse2.status).json(apiResponse2.data);
} catch (error) {
console.error('Error handling job request:', error);
// Log detailed error information
if (error.response) {
console.error('Response data:', error.response.data);
console.error('Response status:', error.response.status);
console.error('Response headers:', error.response.headers);
res.status(error.response.status).json({ error: error.response.data });
} else if (error.request) {
console.error('Request data:', error.request);
res.status(500).json({ error: 'No response received from the server' });
} else {
console.error('Error message:', error.message);
res.status(500).json({ error: 'An unexpected error occurred' });
}
}
});
// Start the server
const PORT = process.env.PORT || 8080;
app.listen(PORT, () => {
log(`Microservice listening on port ${PORT}`);
});
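Once dependencies are installed (npm install), the service can be smoke-tested locally by starting it with node index.js and sending it a request in the same shape DX Engine will use. Using the same placeholder conventions as the examples above:

POST http://localhost:8080/start-job
Content-Type: application/json
downstream-authorization: Bearer {{openaiApiKey}}

with the illustrative payload shown earlier as the request body.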
package.json
{
"name": "relay",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"start": "node index.js"
},
"repository": {
"type": "git",
"url": "git+https://github.com/conscia/relay.git"
},
"keywords": [],
"author": "",
"license": "ISC",
"bugs": {
"url": "https://github.com/conscia/relay/issues"
},
"homepage": "https://github.com/conscia/relay#readme",
"dependencies": {
"axios": "^1.7.7",
"dotenv": "^16.4.5",
"express": "^4.21.0"
}
}
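With index.js and package.json in place, one deployment option is Cloud Run's source-based deploy; the service name and region below are illustrative:

gcloud run deploy relay --source . --region us-central1 --allow-unauthenticated

Cloud Run builds the container from source, runs npm start, and supplies the PORT environment variable the code already reads.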
OpenAI Configuration Details
OpenAI Assistants are the API-only equivalent of Custom GPTs, which are available only through the web UI. They cannot be accessed with a single-call Completion; instead, they are accessed by creating a threaded conversation. However, this unlocks additional capabilities, such as adding further back-and-forth to the dialog, redirecting a conversation to a more appropriate Assistant, and so on.
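For reference, the three raw Assistants API calls that Relay orchestrates look roughly like the sketch below (Assistants v2 at the time of writing; verify the paths and response fields against OpenAI's current documentation before relying on them):

const axios = require('axios');

const headers = {
  'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`,
  'Content-Type': 'application/json',
  'OpenAI-Beta': 'assistants=v2'
};

async function askAssistant(assistantId, question) {
  // 1. Create a thread and start a run in a single call.
  const { data: run } = await axios.post('https://api.openai.com/v1/threads/runs', {
    assistant_id: assistantId,
    thread: { messages: [{ role: 'user', content: question }] }
  }, { headers });

  // 2. Poll the run until it settles (Relay does this generically via pollUrl/doneRegex).
  let status = run.status;
  while (status === 'queued' || status === 'in_progress') {
    await new Promise(resolve => setTimeout(resolve, 1000));
    const { data } = await axios.get(
      `https://api.openai.com/v1/threads/${run.thread_id}/runs/${run.id}`, { headers });
    status = data.status;
  }

  // 3. Read the Assistant's reply from the thread; messages are returned newest-first.
  const { data: messages } = await axios.get(
    `https://api.openai.com/v1/threads/${run.thread_id}/messages`, { headers });
  return messages.data[0].content[0].text.value;
}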
OpenAI API Key
On the OpenAI Platform API Keys page, create an API Key with, at a minimum, write permissions for "Assistants" and "Threads".
OpenAI Assistant
On the OpenAI Platform Assistants Page, create an Assistant to receive the incoming request. This example uses a simple, to-the-point Assistant:
| Field | Value |
| --- | --- |
| Name | Succinct Jimmy |
| System instructions | You're an educational service for extremely smart people. Give brief, technically-sound explanations. |
| Model | gpt-4o |
| Response format | text |
| Temperature | 1 |
| Tools | None |
Retain the API key and the Assistant ID you just generated. We will execute the remainder of the recipe in DX Engine.