Script Bundle Module
This module contains all the required information to add a Script Data Source to Kubling.
The following tree shows the recommended internal directory and file organization and is based on a real module that allows interaction with the JIRA API:
📄 bundle-script-info.yaml
This is the only file whose location must be the root directory of the module. See the schema of the file here.
📄 jira.ddl
This module contains its own version of the DDL file, which is the recommended practice to keep modules self-contained. Due to its importance, we suggest placing the file in the root directory.
📂 module directory
- 📂 platform
This directory contains scripts for interacting with all the platforms/systems supported by this module. Ideally, one module should support only one Data Source; however, in some circumstances, you may want to group multiple sources in a single module. In short, this directory includes one subdirectory per supported platform/system.
Delegates
Delegate scripts are the only scripts the Engine calls directly when the DQP routes a task to this Data Source. There are four of them (ResultSet, Insert, Update, and Delete), and implementing all of them is not mandatory. For example, if your module only supports fetching data, you will only need the ResultSet delegate.
Their locations are configured in the bundle-script-info.yaml.
Due to their importance, we suggest placing them in the root directory as shown in the example above.
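To give a rough idea of what such a script can look like, the following is a purely illustrative sketch of a ResultSet delegate for the JIRA module: it fetches issues over HTTP and hands rows back to the Engine. The contextVars entries (JIRA_BASE_URL, JIRA_TOKEN) and the resultSet.addRow call are hypothetical names used only for illustration; the actual delegate contract is defined by the scripting API.
// Illustrative ResultSet delegate: fetch JIRA issues and return them as rows.
// NOTE: resultSet.addRow and the contextVars entries are hypothetical names.
let req = {
    "url": `${contextVars.JIRA_BASE_URL}/rest/api/2/search`,
    "method": "GET",
    "headers": {
        "Accept": "application/json",
        "Authorization": `Bearer ${contextVars.JIRA_TOKEN}`
    }
}

let resp = httpCli.exec(req);
if (resp.statusCode !== 200) throw new Error(`Response ${resp.statusCode} | Message ${resp.content}`);

// Map each issue returned by JIRA to a row of the virtual table.
let issues = JSON.parse(resp.content).issues;
for (let pos = 0; pos < issues.length; pos++) {
    resultSet.addRow({
        "key": issues[pos].key,
        "summary": issues[pos].fields.summary
    });
}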
📄 Initialization Script
Initialization scripts are useful in situations where the module needs certain values for proper functioning. For example, the Azure module's API client requires a token to be attached to requests. The Azure token has a short lifespan and is generated by calling a login endpoint using client credentials. If we include that logic inside the API client, it would add unnecessary overhead and result in the generation of thousands of tokens. Therefore, the initialization script is the right place to handle that logic.
Even though the initialization script can be loaded from anywhere within the module, we do suggest placing it at the root level.
When an initialization script is present, the script must inform the Engine of the result of the initialization process, since a failure here will prevent the Engine from starting correctly.
See the context member through which the Engine receives this information.
Example:
import { generateAzureToken } from "../api/TokenGenerator";

try {
    // Generate the initial token so the API client can start making requests.
    generateAzureToken();
    // Tell the Engine the module initialized correctly.
    initResult.initialized();
} catch (e) {
    // Report the failure; the Engine will not start correctly otherwise.
    initResult.error(e.message);
}
📄 Scheduled Scripts
Scheduled scripts are executed based on a cron expression. Following the Azure example mentioned above, we can have a scheduled script that refreshes the token before it expires, making token generation completely transparent to the API client and the delegates.
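As a rough sketch of such a refresher, reusing generateAzureToken from the initialization example above, and assuming TokenGenerator caches the new token where the API client can read it, the whole scheduled script can be as small as:
import { generateAzureToken } from "../api/TokenGenerator";

// Refresh the short-lived Azure token before it expires, so the API client
// and the delegates never have to deal with authentication themselves.
try {
    generateAzureToken();
} catch (e) {
    // Surface the failure; the next scheduled run will retry.
    throw new Error(`Azure token refresh failed: ${e.message}`);
}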
Even though it is not recommended to overload the Engine with hundreds of scheduled tasks (if you do, take that into account when assigning resources, such as CPU and memory, to the instance), another possible use case is notifiers that push messages to other systems, or scripts that maintain consistency based on information present in other Data Sources.
Try to keep scheduled scripts as separate from the rest of the code as possible.
For example, the following script gets the current state of the Deployments of a Kubernetes cluster named k8s_cluster_1 and pushes it to Slack:
// Fetch the latest condition of each Deployment in the cluster.
var result = DBEngine.executeQuery("kblops",
    `
    SELECT dc.clusterName, dc.metadata__name, dc.metadata__namespace, dc.status, dc.lastUpdateTime, dc.lastTransitionTime
    FROM kube1.DEPLOYMENT_CONDITIONS AS dc
    RIGHT JOIN (
        SELECT metadata__name, metadata__namespace, MAX(lastUpdateTime) AS lastUpdateTime, MAX(lastTransitionTime) AS lastTransitionTime
        FROM kube1.DEPLOYMENT_CONDITIONS
        WHERE metadata__namespace IN (SELECT metadata__name FROM kube1.NAMESPACE WHERE clusterName = 'kube1')
        GROUP BY clusterName, metadata__name, metadata__namespace
    ) AS subquery
    ON
        dc.metadata__namespace = subquery.metadata__namespace AND
        dc.metadata__name = subquery.metadata__name AND
        dc.lastUpdateTime = subquery.lastUpdateTime AND
        dc.lastTransitionTime = subquery.lastTransitionTime
    `);

// Build one message line per row returned by the query.
const rows = result.rows;
let finalMessage = '';
for (let pos = 0; pos < rows.size(); pos++) {
    let row = rows.get(pos);
    finalMessage += `Cluster: ${row.clustername} | Namespace: ${row.metadata__namespace} | Deploy: ${row.metadata__name} >> Is Active: ${row.status}\n`;
}

// Push the summary to Slack using the built-in HTTP client.
let req = {
    "url": `https://slack.com/api/chat.postMessage`,
    "method": "POST",
    "headers": {
        "Accept": "*/*",
        "Content-Type": "application/x-www-form-urlencoded",
        "Accept-Encoding": "gzip",
        "Authorization": `Bearer ${contextVars.SLACK_TOKEN}`
    },
    "formUrlEncoded": true,
    "form": {
        "channel": "kubling-engineering",
        "text": finalMessage
    }
}

let resp = httpCli.exec(req);
if (resp.statusCode !== 200) throw new Error(`Response ${resp.statusCode} | Message ${resp.content}`);
In a more complex, and certainly more interesting, scenario, you won't have to call Slack's API manually. Instead, you will likely have Slack as a Data Source, and the scheduled script will simply perform an INSERT into a chat table, which will trigger the actual API call.
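As a rough sketch of that approach, assuming a slack virtual schema exposing a CHAT_MESSAGE table and an update counterpart to DBEngine.executeQuery (shown here as DBEngine.executeUpdate; all of these names are hypothetical), the notification from the previous example would become:
// Hypothetical sketch: the Slack Data Source turns this INSERT into the actual API call.
// Schema, table, column and function names are illustrative only.
// finalMessage is the summary built in the previous example.
DBEngine.executeUpdate("kblops",
    `
    INSERT INTO slack.CHAT_MESSAGE (channel, text)
    VALUES ('kubling-engineering', '${finalMessage}')
    `);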