A set of tools to help users create, test, and build containerized management packs for VMware Aria Operations
VMware Aria Operations Integration SDK
Welcome to the VMware Aria Operations Integration SDK.
What is the Integration SDK?
The Integration SDK creates Management Packs to add custom objects, data, and relationships from an endpoint into VMware Aria Operations.
Using this SDK to create a Management Pack requires some Python knowledge (more languages are planned), and an understanding of how to get data from the endpoint using an API. For example, to create a Management Pack for Cassandra DB, an understanding of how to write an SQL query, execute it, and read the results is required.
Currently, installing a Management Pack built with the Integration SDK is supported for On-Prem versions of VMware Aria Operations only, but we are working to bring support to VMware Aria Operations Cloud in a future release.
For a high-level overview of VMware Aria Operations, Management Packs, and this SDK, see the introduction.
What can the Integration SDK be used for?
The Integration SDK can be used to add any endpoint that supports remote monitoring to VMware Aria Operations. Adding the endpoint involves creating objects that represent the endpoint, which may include properties, metrics, and events, as well as relationships between objects.
Remote monitoring uses an API (such as REST, SNMP, SQL, etc) to retrieve the data (as opposed to agent-based monitoring, where the monitoring code runs in the same location as the endpoint).
For an example walkthrough of creating a new Management Pack monitoring an endpoint, see Creating a New Management Pack.
The Integration SDK can also be used to extend objects created by another Management Pack with additional metrics, properties, events, or relationships. This can be useful to ensure access to custom data without having to re-implement already existing data.
For an example walkthrough of the steps required to extend another management pack, see Extending an Existing Management Pack
Where should I start?
- If you want to get started creating your first Management Pack, or don't know where to start, read the Get Started tutorial.
- If you have completed the Get Started tutorial, the walkthroughs are guides for modifying your adapter.
- All documentation is available from the contents page.
Get Started
This guide will walk through setting up the SDK and using the SDK to create, test, and install a simple Management Pack (integration) in VMware Aria Operations.
Contents
- Requirements
- Installation
- Creating a Management Pack
- Testing a Management Pack
- Building and Installing a Management Pack
Requirements
Operating System:
The VMware Aria Operations Integration SDK has been tested on the following operating systems:
- Windows 10
- Windows 11
- macOS 12 (Monterey)
- macOS 13 (Ventura)
- Debian Linux
- Fedora Linux
Other operating systems may be compatible.
VMware Aria Operations
The Management Packs generated by the VMware Aria Operations Integration SDK will only run on versions that support containerized Management Packs. Currently, this is limited to on-prem installs, version 8.10 or later. In addition, at least one Cloud Proxy (also version 8.10 or later) must be set up in VMware Aria Operations, as containerized Management Packs must be run on a Cloud Proxy collector.
Dependencies
- Docker 20.10.0 or later. Updating to the latest stable version is recommended. For instructions on installing Docker, go to Docker's installation documentation and follow the instructions provided for your operating system.
- Python3 3.9.0 or later. Updating to the latest stable version is recommended. Python 3.8 and earlier (including Python2) are not supported. For instructions on installing Python, go to Python's installation documentation, and follow the instructions provided for your operating system.
- Pipx (recommended) or pip. If Python3 is installed, pip is most likely also installed. For instructions on installing pipx, go to pipx's installation documentation, and follow the instructions provided. For instructions on installing pip, go to pip's installation documentation, and follow the instructions provided.
- Git 2.35.0 or later. Updating to the latest stable version is recommended. For instructions on installing Git, go to Git's installation documentation and follow the instructions provided for your operating system.
Installation
To install the SDK, use pipx
to install into an isolated environment. We recommend this in most cases to avoid dependency conflicts. Alternatively, pip
can be used to install into the global environment, or to install into a manually-managed virtual environment.
pipx install vmware-aria-operations-integration-sdk
Creating a Management Pack
After the SDK is installed, create a new project by running mp-init. This tool asks a series of questions that guide the creation of a new management pack project.
- Enter a directory to create the project in. This is the directory where adapter code, metadata, and content will reside. If the directory doesn't already exist, it will be created. Path:

  The path can be absolute or relative, and should end in an empty or non-existing directory. This directory will contain the new Management Pack project.

- Management Pack display name

  The display name shows up in VMware Aria Operations (Data Sources → Integrations → Repository) and when adding an account.

  This Management Pack's display name is 'TestAdapter', and uses the default icon

- Management Pack adapter key

  This field is used internally to identify the Management Pack and Adapter Kind. By default, it is set to the Management Pack display name with special characters and whitespace stripped from it.

- Management Pack description

  This field should describe what the Management Pack will do or monitor.

- Management Pack vendor

  The vendor field shows up in the UI under 'About' on the Integration Card.

  This Management Pack's vendor is 'VMware'

- Enter a path to a EULA text file, or leave blank for no EULA

  VMware Aria Operations requires a EULA file to be present in a Management Pack. If one isn't provided, a stub EULA file (`eula.txt` in the root project directory) will be added to the project, which reads: "There is no EULA associated with this Management Pack."

- Enter a path to the Management Pack icon file, or leave blank for no icon

  The icon is used in the VMware Aria Operations UI if present. If it is not present, a default icon will be used. The icon file must be in PNG format and 256x256 pixels. An icon file can be added later by copying the icon to the root project directory and setting the value of the `"pak_icon"` key in the `manifest.txt` file to the icon's file name.
For complete documentation of the mp-init
tool including an overview of its output, see the MP Initialization Tool Documentation.
Template Project
Every new project generates the basic file and directory structure required to develop and build a Management Pack.
Each file and directory is discussed in depth in the mp-init documentation. app/adapter.py
is the adapter's
entry point and the best starting point.
adapter.py
is a template adapter that collects several objects and metrics from the
container in which the adapter is running. The template adapter has comments throughout its code that explain what the code does
and how to customize it for your adapter.
The methods inside the adapter template are required, and should be modified to generate a custom
adapter. Each method fulfills a request from the VMware Aria Operations collector, and can be tested individually using
mp-test
(covered in Testing a Management Pack).
The adapter is stateless. This means the adapter cannot store any data for use in later method calls.
Each method is described below:
- `test(adapter_instance)`: Performs a test connection using the information in the adapter_instance to verify that the adapter instance has been configured properly. A typical test connection will generally:
  - Read identifier values from adapter_instance that are required to connect to the target(s)
  - Connect to the target(s) and retrieve some sample data
  - If any of the above failed, return an error; otherwise, pass
  - Disconnect cleanly from the target (ensure this happens even if an error occurs)
- `get_endpoints(adapter_instance)`: Runs before the 'test' method. VMware Aria Operations uses the results to extract a certificate from each URL. If a certificate is not trusted by the VMware Aria Operations Trust Store, the user is prompted to accept or reject it. If accepted, the certificate is added to the AdapterInstance object that is passed to the 'test' and 'collect' methods. Any certificate encountered in those methods should then be validated against the certificate(s) in the AdapterInstance. This method only works with HTTPS endpoints; other endpoint types (e.g., database connections) will not work.
- `collect(adapter_instance)`: Performs a collection against the target host. A typical collection will generally:
  - Read identifier values from adapter_instance that are required to connect to the target(s)
  - Connect to the target(s) and retrieve data
  - Add the data to the CollectResult as objects, properties, metrics, etc.
  - Disconnect cleanly from the target (ensure this happens even if an error occurs)
  - Return the CollectResult
- `get_adapter_definition()`: Optional method that defines the Adapter Instance configuration: the set of parameters and credentials used to connect to the target and configure the adapter. It also defines the object types and attribute types present in a collection. Setting these helps VMware Aria Operations validate, process, and display the data correctly. If this method is omitted, a `describe.xml` file containing the same data must be created manually inside the `conf` directory. Generally, manual creation is only necessary when using advanced features of the `describe.xml` file that are not available through this method.
For further guidance on using the template project, consult the Walkthroughs section.
Testing a Management Pack
In the Management Pack directory, the installation script writes a requirements.txt
file containing the version of the
SDK used to generate the project, and installs the SDK into a virtual environment named venv
. Note that the packages
in requirements.txt
are not installed into the adapter. To add a package to the adapter, specify it in the file
adapter_requirements.txt
.
To use the SDK, navigate to the newly-generated project directory and activate the virtual environment:
For Mac and Linux:
source venv/bin/activate
(This script is written for the bash shell. If you use the csh or fish shell, use the alternate activate.csh or activate.fish script instead.)

For Windows:
venv\Scripts\activate.bat
Note: To exit the virtual environment, run
deactivate
in the virtual environment.
To test a project, run mp-test
in the virtual environment.
If mp-test
is run from anywhere outside the root project directory, the tool will prompt to choose a project, and will
test the selected project. If the tool is run from a project directory, the tool will automatically test that project.
mp-test
will ask for a connection. No connections should exist, so choose New Connection. The test tool then
reads the conf/describe.xml
file to find the connection parameters and credentials required for a connection, and
prompts for each. This is similar to creating a new Adapter Instance in the VMware Aria Operations UI. Connections are automatically
saved per project, and can be reused when re-running the mp-test
tool.
Note: In the template project, the only connection parameter is ID, and because it connects to the container it is running on, this parameter is not necessary; it is only there as an example, and can be set to any value. The template also implements an example Test Connection. If a Test Connection is run (see below) with the ID set to the text bad, then the Test Connection will fail.
The test tool also asks for the method to test. There are four options:
- Test Connection - This call tests the connection and returns either an error message if the connection failed, or an empty json object if the connection succeeded.
- Collect - This call tests the collection, and returns objects, metrics, properties, events, and relationships.
- Endpoint URLs - This returns a list (possibly empty) of URLs that have distinct SSL certificates that VMware Aria Operations can ask the end user to import into the TrustStore.
- Version - This returns the VMware Aria Operations Collector API version the adapter implements. The implementation of this method is not generally handled by the developer.
For more information on these endpoints, see the Swagger API documentation. Each response is validated against the API.
For complete documentation of the mp-test
tool see the MP Test Tool Documentation.
Building and Installing a Management Pack
To build a project, run mp-build
in the virtual environment.
If mp-build
is run from anywhere outside the root project directory, the tool will prompt to choose a project, and will
build the selected project. If the tool is run from a project directory, the tool will automatically build that
project.
Once the project is selected (if necessary), the tool will build the management pack and emit a pak
file which can be
installed on VMware Aria Operations. The pak
file will be located in <project root>/build/
.
To install the pak
file, in VMware Aria Operations navigate to Data Sources → Integrations →
Repository and click ADD
. Select and upload the generated pak
file, accept the README, and install the management pack.
To configure the management pack, in VMware Aria Operations navigate to Data Sources → Integrations → Accounts and click ADD ACCOUNT. Select the newly-installed management pack and configure the required fields. For Collector/Group, make sure that a cloud proxy collector is selected. Click VALIDATE CONNECTION to test the connection; once it returns successfully, click ADD.
By default, a collection will run every 5 minutes. The first collection should happen immediately; however, newly-created objects cannot have metrics, properties, and events added to them. After the second collection, approximately five minutes later, the objects' metrics, properties, and events should appear. These can be checked by navigating to **Environment → Object Browser → All Objects** and expanding the Adapter and associated object types and objects.
The CPU object's idle-time
metric in a Management Pack named QAAdapterName
.
For complete documentation of the mp-build
tool see the MP Build Tool Documentation.
Walkthroughs
Creating a New Management Pack
This guide walks through the steps necessary to monitor an endpoint, using Alibaba Cloud as an example. We will create a simple management pack with objects, metrics, properties, and relationships that monitor Alibaba Cloud. It assumes you have already installed the SDK, understand the tools and steps in the 'Get Started' section, and have an Alibaba Cloud account.
For the purposes of this walkthrough, we will be adding an ECS Instance object with
six properties, and a relationship to the Adapter Instance. All the data can be found
by calling the DescribeInstancesRequest
method in the ECS Python Library.
The first step is to run mp-init
and create a new project. There are no restrictions,
except that the adapter kind key cannot be used by another management pack that is
installed on the same system. For example, we used the following to create the sample:
❯ mp-init
Enter a directory to create the project in. This is the directory where adapter code, metadata, and
content will reside. If the directory doesn't already exist, it will be created.
Path: alibaba-cloud-mp
Management pack display name: Alibaba Cloud
Management pack adapter key: AlibabaCloud
Management pack description: Sample Management Pack that monitors Alibaba Cloud
Management pack vendor: VMware, Inc
Enter a path to a EULA text file, or leave blank for no EULA:
Enter a path to the management pack icon file, or leave blank for no icon:
An icon can be added later by setting the 'pak_icon' key in 'manifest.txt' to the
icon file name and adding the icon file to the root project directory.
Creating Project [Finished]
project generation completed
The completed management pack is found in the 'samples' directory, and can be used as a reference for this walkthrough or as a starting point for creating your own.
Once the project has finished generating, we can change directory into the project and activate the Python virtual environment.
Next, we need to modify the adapter code. We will break this up into several steps:
- Add a library for connecting to Alibaba Cloud
- Modify the adapter definition to add fields for connecting to Alibaba Cloud
- Modify the test method to create an Alibaba Cloud connection and run a query
- Modify the collect method to collect objects, metrics, properties, and relationships
- Verify the Alibaba Cloud MP
Add a library for connecting to Alibaba Cloud
In order to add the metrics we want, we will need to be able to send requests to Alibaba
Cloud. We could use any HTTP REST library, such as requests
, but it's usually easier
to use a library designed for the specific service we are monitoring. Thus, for this
sample we will use the official Alibaba Cloud SDK:
aliyun-python-sdk-core
.
Since we will be monitoring ECS instances, we will also need
aliyun-python-sdk-ecs
.
To add a library to the adapter, open the file adapter_requirements.txt
and add a new
line with the name of the library. Optionally, we can also add a version constraint.
Here's what the modified file should look like:
vmware-aria-operations-integration-sdk-lib==0.7.*
psutil
aliyun-python-sdk-core==2.13.36
aliyun-python-sdk-ecs==4.24.61
Note: We can also remove the psutil library, as it is only used in the sample code that we will be replacing. However, we would then no longer be able to run mp-test until we have removed the sample code that depends on psutil, so for now we will keep it.
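For reference, once the sample code that depends on psutil has been removed, the file would contain only:

```
vmware-aria-operations-integration-sdk-lib==0.7.*
aliyun-python-sdk-core==2.13.36
aliyun-python-sdk-ecs==4.24.61
```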
Modify the adapter definition to add fields for connecting to Alibaba Cloud
Now that we have added the library, we need to see what information it needs in order to connect. From the documentation, the client requires:
- Access Key ID
- Region ID
- Access Secret
In the app/adapter.py
file, find the get_adapter_definition()
method. We will define
parameters for the Access Key ID
and Region ID
, and a credential for the
Access Key Secret
. We could put the Access Key ID
in the credential, however
credentials are not used to identify adapter instances. If Region ID
was the only
required parameter, then we would only be able to make one Adapter Instance per region.
Using the Access Key ID
as an additional identifier will allow us to monitor multiple
accounts with the same Region ID
.
After also removing the 'ID' parameter used by the sample adapter, the method could look similar to this:
def get_adapter_definition() -> AdapterDefinition:
    definition = AdapterDefinition(ADAPTER_KIND, ADAPTER_NAME)
    definition.define_string_parameter(
        "access_key_id",
        label="Access Key ID",
        description="The AccessKey ID of the RAM account",
        required=True,
    )
    definition.define_enum_parameter(
        "region_id",
        label="Region ID",
        values=[
            "cn-hangzhou",
            "cn-beijing",
            "cn-zhangjiakou",
            "cn-shanghai",
            "cn-qingdao",
            "cn-huhehaote",
            "cn-shenzhen",
            "cn-chengdu",
            "cn-hongkong",
            "ap-northeast-1",
            "ap-south-1",
            "ap-southeast-1",
            "ap-southeast-2",
            "ap-southeast-3",
            "ap-southeast-5",
            "eu-central-1",
            "eu-west-1",
            "me-east-1",
            "us-east-1",
            "us-west-1",
        ],
        description="Set the region to collect from. Only one region can be "
                    "selected per Adapter Instance.",
        required=True,
    )
    ram_account = definition.define_credential_type("RAM Account")
    ram_account.define_password_parameter(
        "access_key_secret",
        "AccessKey Secret",
        required=True,
    )
    # The key 'container_memory_limit' is a special key that is read by the
    # VMware Aria Operations collector to determine how much memory to allocate
    # to the docker container running this adapter. It does not need to be read
    # inside the adapter code.
    definition.define_int_parameter(
        "container_memory_limit",
        label="Adapter Memory Limit (MB)",
        description="Sets the maximum amount of memory VMware Aria Operations can "
                    "allocate to the container running this adapter instance.",
        required=True,
        advanced=True,
        default=1024,
    )
Now that we've defined the connection parameters, we should define the objects that we will collect. For now, let's collect some information about ECS Instances. This is a small example; the implementation in the samples directory includes ECS metrics and an additional Security Group object type. As in the template, the method must end by returning the `definition` object.
ecs_instance = definition.define_object_type("ecs_instance", "ECS Instance")
ecs_instance.define_string_identifier("instance_id", "Instance ID")
ecs_instance.define_string_identifier("region_id", "Region ID")
ecs_instance.define_numeric_property("cpu", "CPU Count")
ecs_instance.define_numeric_property("memory", "Total Memory", unit=Units.DATA_SIZE.MEBIBYTE)
ecs_instance.define_string_property("status", "Status")
ecs_instance.define_string_property("instance_type", "Instance Type")
ecs_instance.define_string_property("private_ip", "Private IP Addresses")
ecs_instance.define_string_property("public_ip", "Public IP Addresses")
Modify the test
method to create an Alibaba Cloud connection and run a query
We can try to connect and run a test query. We will do this in the test
method. Notice
this takes an AdapterInstance
as a parameter. We will replace all the code that is
inside the try block.
All the parameters and credentials from the definition will be present in this Adapter Instance. We can access them using the keys that we defined in the get_adapter_definition function to get the value assigned to each parameter:
access_key = adapter_instance.get_identifier_value("access_key_id")
region = adapter_instance.get_identifier_value("region_id")
secret = adapter_instance.get_credential_value("access_key_secret")
We can then use them to connect to Alibaba Cloud and run a test query. First, import the required modules:
from aliyunsdkcore.client import AcsClient
from aliyunsdkecs.request.v20140526 import DescribeInstancesRequest
Then using the identifier values from above, create a client and initiate a request:
# Create and initialize an AcsClient instance
client = AcsClient(
    access_key,
    secret,
    region,
)

request = DescribeInstancesRequest.DescribeInstancesRequest()
request.set_accept_format('json')
response = client.do_action_with_exception(request)
logger.info(str(response, encoding='utf-8'))
return result
Since we can expect that this will sometimes fail, e.g., if the user provides the wrong Access Key or Secret, we should ensure there is good error-handling in this function.
If we detect a failure (e.g., in the except
block), we should call
result.with_error(error_message)
to indicate the test has failed. If no errors have
been attached to the result
object, the test will pass. (Note that calling
result.with_error(...)
multiple times in the same test will result in only the last
error being displayed.)
If the management pack will be widely distributed, it may also be worthwhile to catch common errors and ensure the resulting messages are clear and user-friendly.
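The error-handling pattern described above can be sketched as follows. The `TestResult` class, the `AuthError` exception, and the `connect` helper here are self-contained stand-ins for illustration; a real adapter uses the SDK's result object and the Alibaba Cloud client:

```python
class TestResult:
    """Stand-in for the SDK's test result object (illustration only)."""
    def __init__(self):
        self.error = None

    def with_error(self, message):
        # Only the last error set is displayed, so prefer one clear message.
        self.error = message


class AuthError(Exception):
    pass


def connect(access_key, secret, region):
    # Hypothetical helper standing in for AcsClient plus a sample query.
    if not access_key or not secret:
        raise AuthError("invalid credentials")
    return object()


def run_test(access_key, secret, region):
    result = TestResult()
    try:
        connect(access_key, secret, region)
    except AuthError:
        # Catch common failures and give a clear, user-friendly message.
        result.with_error("Invalid Access Key ID or AccessKey Secret")
    except Exception as e:
        result.with_error(f"Unexpected error: {e}")
    return result
```

If no error is attached, the returned result represents a passing test, matching the behavior described above.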
We should now be able to run mp-test connect
to run this code. The mp-test
tool
will ask you to create a new connection, prompting for 'Access Key ID', 'Region ID', and
'Access Key Secret'. Afterward, it will ask whether it should override SuiteAPI¹ credentials. We will not need them for this sample, so we can select 'No'.

¹ SuiteAPI is a REST API on VMware Aria Operations that can be used for many purposes. The documentation for this API can be found on any VMware Aria Operations instance at https://[aria_ops_hostname]/suite-api/. The 'adapter_instance' object that is passed to the 'test', 'get_endpoints', and 'collect' methods can automatically connect to this API and has methods for querying it.
If everything was successful, the result should look similar to this:
(venv-Alibaba Cloud) ❯ mp-test connect
Choose a connection: default
Building adapter [Finished]
Waiting for adapter to start [Finished]
Running Connect [Finished]
{}
Avg CPU % | Avg Memory Usage % | Memory Limit | Network I/O | Block I/O
------------------------------+----------------------------+--------------+---------------------+--------------
29.6 % (0.0% / 29.6% / 59.1%) | 4.0 % (4.0% / 4.0% / 4.0%) | 1.0 GiB | 5.52 KiB / 8.76 KiB | 0.0 B / 0.0 B
Request completed in 1.24 seconds.
All validation logs written to '~/Code/alibaba-cloud-mp/logs/validation.log'
Validation passed with no errors
Modify the collect
method to collect objects, metrics, properties, and relationships
Now that the test method is working, we can implement the collect method. This is the method where we query Alibaba Cloud for the objects, metrics, and other data we want, and send them to VMware Aria Operations.
First, we should remove all the sample code inside the try
block. All the code for the
following steps should be inside the try
block.
Then, we need to establish a connection to Alibaba Cloud. We can do this in the same way as in the test method. In many cases, creating a connection function that is called from both test and collect is worthwhile.
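Such a shared helper might look like the sketch below. The `AdapterInstance` class here is a stand-in exposing only the two getters used, so the snippet runs on its own; in the adapter, the real `AdapterInstance` is passed in and the helper would return an `AcsClient` as shown in the test method:

```python
class AdapterInstance:
    """Stand-in exposing the getters used below (illustration only)."""
    def __init__(self, identifiers, credentials):
        self._identifiers = identifiers
        self._credentials = credentials

    def get_identifier_value(self, key):
        return self._identifiers[key]

    def get_credential_value(self, key):
        return self._credentials[key]


def get_client_args(adapter_instance):
    # Shared by test() and collect() so connection logic lives in one place.
    access_key = adapter_instance.get_identifier_value("access_key_id")
    region = adapter_instance.get_identifier_value("region_id")
    secret = adapter_instance.get_credential_value("access_key_secret")
    # In the real adapter this would be: return AcsClient(access_key, secret, region)
    return access_key, secret, region
```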
Next, we'll run several queries to get the data from Alibaba Cloud that we want, add
the objects to the result
, add data to the objects, and return the result. This
collects all the properties in the small definition above. The implementation in the
samples directory has more data.
request = DescribeInstancesRequest.DescribeInstancesRequest()
request.set_accept_format('json')
response = client.do_action_with_exception(request)
json_response = json.loads(response)

# Add the adapter instance so that we can make a relationship to it from the
# ECS instances
result.add_object(adapter_instance)

for instance in json_response.get("Instances", {}).get("Instance", []):
    id = instance.get("InstanceId")
    if not id:
        continue
    name = instance.get("HostName", id)
    ecs_object = result.object(
        ADAPTER_KIND,
        "ecs_instance",
        name,
        identifiers=[
            Identifier("instance_id", id),
            Identifier("region_id", region),
        ],
    )
    ecs_object.add_parent(adapter_instance)
    ecs_object.with_property("cpu", instance.get("Cpu"))
    ecs_object.with_property("memory", instance.get("Memory"))
    ecs_object.with_property("status", instance.get("Status"))
    ecs_object.with_property("instance_type", instance.get("InstanceType"))
    ecs_object.with_property(
        "private_ip",
        str(instance.get("VpcAttributes", {}).get("PrivateIpAddress", {}).get("IpAddress", [])),
    )
    ecs_object.with_property(
        "public_ip",
        str(instance.get("PublicIpAddress", {}).get("IpAddress", [])),
    )
Verify the Alibaba Cloud MP
To verify the MP, run mp-test collect using the same connection we created earlier. We should see all ECS Instances in the selected region that the RAM user associated with the access key has permission to view, each with a small number of properties attached. In addition, each ECS Instance should be a child of the Adapter Instance. For example, in a very small environment with a single ECS Instance, we may see a result similar to this:
(venv-Alibaba Cloud) ❯ mp-test -c default collect
Building adapter [Finished]
Waiting for adapter to start [Finished]
Running Collect [Finished]
{
  "nonExistingObjects": [],
  "relationships": [],
  "result": [
    {
      "events": [],
      "key": {
        "adapterKind": "AlibabaCloud",
        "identifiers": [
          {
            "isPartOfUniqueness": true,
            "key": "access_key_id",
            "value": "LTAI5tJAcgHHoDT9d4xWNQBu"
          },
          {
            "isPartOfUniqueness": false,
            "key": "container_memory_limit",
            "value": "1024"
          },
          {
            "isPartOfUniqueness": true,
            "key": "region_id",
            "value": "us-east-1"
          }
        ],
        "name": "default",
        "objectKind": "AlibabaCloud_adapter_instance"
      },
      "metrics": [],
      "properties": []
    },
    {
      "events": [],
      "key": {
        "adapterKind": "AlibabaCloud",
        "identifiers": [
          {
            "isPartOfUniqueness": true,
            "key": "instance_id",
            "value": "i-0xi23s0o5pgnbdir3e3j"
          },
          {
            "isPartOfUniqueness": true,
            "key": "region_id",
            "value": "us-east-1"
          }
        ],
        "name": "iZ0xi23s0o5pgnbdir3e3jZ",
        "objectKind": "ecs_instance"
      },
      "metrics": [],
      "properties": [
        {
          "key": "cpu",
          "numberValue": 1.0,
          "timestamp": 1681933134430
        },
        {
          "key": "memory",
          "numberValue": 1024.0,
          "timestamp": 1681933134430
        },
        {
          "key": "status",
          "stringValue": "Running",
          "timestamp": 1681933134430
        },
        {
          "key": "instance_type",
          "stringValue": "ecs.n1.tiny",
          "timestamp": 1681933134430
        },
        {
          "key": "private_ip",
          "stringValue": "['172.29.43.26']",
          "timestamp": 1681933134430
        },
        {
          "key": "public_ip",
          "stringValue": "['47.90.216.22']",
          "timestamp": 1681933134430
        }
      ]
    }
  ]
}
Collection summary:
Table cell format is: 'total (min/median/max)'
Object Type | Count | Metrics | Properties | Events | Parents | Children
--------------------------------------------+-------+---------+------------+--------+---------+---------
AlibabaCloud::AlibabaCloud_adapter_instance | 1 | 0 | 0 | 0 | 0 | 0
AlibabaCloud::ecs_instance | 1 | 0 | 6 | 0 | 0 | 0
Parent Type | Child Type | Count
------------+------------+------
Avg CPU % | Avg Memory Usage % | Memory Limit | Network I/O | Block I/O
------------------------------+----------------------------+--------------+----------------------+--------------
34.6 % (0.0% / 34.6% / 69.1%) | 4.0 % (4.0% / 4.0% / 4.0%) | 1.0 GiB | 5.52 KiB / 10.21 KiB | 0.0 B / 0.0 B
Collection completed in 0.96 seconds.
All validation logs written to '~/Code/alibaba-cloud-mp/logs/validation.log'
Validation passed with no errors
When everything is working as expected locally using mp-test
, we can run
mp-build
and install on VMware Aria Operations for a final verification.
Next Steps
Extending an Existing Management Pack
This guide walks through the steps necessary to extend an existing Management Pack to add additional data, using the MySQL Management Pack as an example. Extending an existing management pack is similar to creating a new one, but has some additional constraints. We will create a management pack that adds metrics to the existing MySQL management pack's database object. It assumes you have already installed the SDK, understand the tools and steps in the 'Get Started' section, and have installed and configured the MySQL management pack on a VMware Aria Operations instance in your local network.
For the purposes of this walkthrough, we will be adding five metrics to the MySQL database
object that show the total number of lock waits and statistics about the time spent
waiting for those locks. This information can be found in MySQL in the table
performance_schema.table_lock_waits_summary_by_table
.
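The query for those five metrics might look like the following sketch. The column names come from MySQL's performance_schema; the exact set of statistics collected is up to you:

```python
# Total lock waits plus total/min/avg/max wait time, per table.
LOCK_WAITS_QUERY = """
SELECT OBJECT_SCHEMA,
       OBJECT_NAME,
       COUNT_STAR,        -- total number of lock waits
       SUM_TIMER_WAIT,    -- total time spent waiting
       MIN_TIMER_WAIT,
       AVG_TIMER_WAIT,
       MAX_TIMER_WAIT
FROM performance_schema.table_lock_waits_summary_by_table
"""
```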
The first step is to run mp-init
and create a new project. There are no restrictions,
except that the adapter kind key cannot be used by another management pack that is
installed on the same system. For example, we used the following to create the sample:
❯ mp-init
Enter a directory to create the project in. This is the directory where adapter code, metadata, and
content will reside. If the directory doesn't already exist, it will be created.
Path: mysql-extension-mp
Management pack display name: Extended MySQL MP
Management pack adapter key: ExtendedMySQLMP
Management pack description: Adds 'Lock Wait' metrics to MySQL Database objects
Management pack vendor: VMware, Inc
Enter a path to a EULA text file, or leave blank for no EULA:
Enter a path to the management pack icon file, or leave blank for no icon:
An icon can be added later by setting the 'pak_icon' key in 'manifest.txt' to the
icon file name and adding the icon file to the root project directory.
Creating Project [Finished]
project generation completed
The completed management pack is found in the 'samples' directory, and can be used as a reference for this walkthrough or as a starting point for creating your own.
Once the project finished generating, we can change directory into the project and activate the Python virtual environment.
Next, we need to modify the adapter code. We will break this up into several steps:
- Add a library for connecting to MySQL
- Modify the adapter definition to add fields for connecting to MySQL
- Modify the test method to create a MySQL connection and run a query
- Modify the collect method to collect metrics and attach them to the correct database objects
- Verify the MP
Add a library for connecting to MySQL
In order to add the metrics we want, we will need to be able to run a query against a MySQL database. There are several Python libraries that can help us do this. For now, let's use mysql-connector-python.
To add a library to the adapter, open the file adapter_requirements.txt and add a new line with the name of the library. Optionally, we can also add a version constraint. Here's what the modified file should look like:
vmware-aria-operations-integration-sdk-lib==0.7.*
psutil
mysql-connector-python>=8.0.32
Note: We could also remove the psutil library, as it is only used in the sample code that we will be replacing. However, we would then no longer be able to run mp-test until we have removed the sample code that depends on psutil, so for now we will keep it.
Modify the adapter definition to add fields for connecting to MySQL
Now that we have added the library, we need to see what information it needs in order to connect. Since the adapter will be running on the VMware Aria Operations Cloud Proxy, which is not where our MySQL instance is running, we will need the following:
- Host
- Port
- Username
- Password
In the app/adapter.py file, find the get_adapter_definition() method. We will define parameters for the Host and Port, and a credential for the Username and Password. After also removing the 'ID' parameter from the sample adapter, the method should look similar to this:
def get_adapter_definition() -> AdapterDefinition:
    logger.info("Starting 'Get Adapter Definition'")
    definition = AdapterDefinition(ADAPTER_KIND, ADAPTER_NAME)

    definition.define_string_parameter("host", "MySQL Host")
    definition.define_int_parameter("port", "Port", default=3306)

    credential = definition.define_credential_type("mysql_user", "MySQL User")
    credential.define_string_parameter("username", "Username")
    credential.define_password_parameter("password", "Password")

    # The key 'container_memory_limit' is a special key that is read by the VMware Aria
    # Operations collector to determine how much memory to allocate to the docker
    # container running this adapter. It does not need to be read inside the adapter
    # code.
    definition.define_int_parameter(
        "container_memory_limit",
        label="Adapter Memory Limit (MB)",
        description="Sets the maximum amount of memory VMware Aria Operations can "
        "allocate to the container running this adapter instance.",
        required=True,
        advanced=True,
        default=1024,
    )

    # This Adapter has no object types directly; rather, it co-opts object types that
    # are part of the MySQL MP to add additional metrics. As such, we can't define
    # those object types here, because they're already defined in the MySQL MP with a
    # different adapter type.
    # If we decide to also create new objects (that are not part of an existing MP),
    # those can be added here.

    logger.info("Finished 'Get Adapter Definition'")
    logger.debug(f"Returning adapter definition: {definition.to_json()}")
    return definition
The adapter definition is also where objects and metrics are defined; however, we are only allowed to define objects and metrics that are part of our own adapter type. Because extensions modify objects that are part of a different adapter type, we can't add them here. This means that we can't set metric metadata such as 'units' and 'labels' that we would generally be able to set.
Modify the test method to create a MySQL connection and run a query
Now that we've defined our parameters, we can try to connect and run a test query. We will do this in the test method. Notice that it takes an AdapterInstance as a parameter. We will replace all the code that is inside the try/except block.
All the parameters and credentials from the definition will be present in this Adapter Instance. We can access them like this, using the keys that we defined in the get_adapter_definition function to get the value assigned to each parameter:
hostname = adapter_instance.get_identifier_value("host")
port = int(adapter_instance.get_identifier_value("port", "3306"))
username = adapter_instance.get_credential_value("username")
password = adapter_instance.get_credential_value("password")
We can then use them to connect to MySQL and run a test query (be sure to import mysql.connector):
connection = mysql.connector.connect(
    host=hostname,
    port=port,
    user=username,
    password=password,
)

cursor = connection.cursor()

# Run a simple test query
cursor.execute("SHOW databases")
for database in cursor:  # The cursor needs to be consumed before it is closed
    logger.info(f"Found database '{database}'")
cursor.close()
Since we can expect this to fail (e.g., if the user provides the wrong username or password), we should ensure there is good error handling in this function.
If we detect a failure (e.g., in the except block), we should call result.with_error(error_message) to indicate the test has failed. If no errors have been attached to the result object, the test will pass. (Note that calling result.with_error(...) multiple times in the same test will result in only the last error being displayed.)
If the management pack will be widely distributed, it may also be worthwhile to catch common errors and ensure the resulting messages are clear and user-friendly.
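One way to do this is to translate raw exceptions into readable messages before attaching them to the result. The sketch below is illustrative: the friendly_error helper is a hypothetical name of ours, and the matched substrings are assumptions about the connector's error text, not part of the SDK or the mysql-connector-python API.

```python
def friendly_error(e: Exception) -> str:
    # Hypothetical helper: map common MySQL connection failures to
    # user-friendly messages. The matched substrings are illustrative;
    # check the actual exceptions raised by your connector library.
    text = str(e)
    if "Access denied" in text:
        return "Login failed: check the MySQL username and password."
    if "Can't connect" in text or "timed out" in text:
        return "Could not reach MySQL: check the host and port."
    return f"Unexpected error while testing the connection: {text}"

# Inside the test method's except block, we could then write:
#     result.with_error(friendly_error(e))
```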
We should now be able to run mp-test connect to run this code. The mp-test tool will ask you to create a new connection, prompting for 'host', 'port', 'username', and 'password'. After that, it will ask if it should override SuiteAPI[1] credentials. Unless you have already set these up, select 'Yes', as we will need them later when we modify the 'collect' method. It will ask you for the SuiteAPI hostname, which should be the hostname of the VMware Aria Operations instance where the MySQL management pack is running, and a username and password which have permission to access the SuiteAPI on that system.
[1] SuiteAPI is a REST API on VMware Aria Operations that can be used for many purposes. The documentation for this API can be found on any VMware Aria Operations instance at https://[aria_ops_hostname]/suite-api/. The 'adapter_instance' object that is passed to the 'test', 'get_endpoints', and 'collect' methods can automatically connect to this API and has methods for querying it.
If everything was successful, the result should look similar to this:
(venv-Extended MySQL MP) ❯ mp-test connect
Choose a connection: New Connection
Building adapter [Finished]
Waiting for adapter to start [Finished]
┌────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│Connections are akin to Adapter Instances in VMware Aria Operations, and contain the parameters │
│needed to connect to a target environment. As such, the following connection parameters and credential fields are │
│derived from the 'conf/describe.xml' file and are specific to each Management Pack. │
└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
Enter connection parameter 'MySQL Host': mysql8-1.localnet
Enter connection parameter 'Port': 3306
Enter connection parameter 'Adapter Memory Limit (MB)': 1024
Enter credential field 'Username': root
Enter credential field 'Password': *********
Override default SuiteAPI connection information for SuiteAPI calls? Yes
Suite API Hostname: aria-ops-1.vmware.com
Suite API User Name: admin
Suite API Password: ********
Set these as the default SuiteAPI connection? Yes
Enter a name for this connection: default
Saved connection 'default' in '~/Code/extended-mysql-mp/config.json'.
The connection can be modified by manually editing this file.
Building adapter [Finished]
Waiting for adapter to start [Finished]
Running Endpoint URLs [Finished]
Running Connect [Finished]
{}
Avg CPU % | Avg Memory Usage % | Memory Limit | Network I/O | Block I/O
------------------------------+----------------------------+--------------+---------------------+--------------
14.9 % (0.0% / 14.9% / 29.8%) | 4.0 % (4.0% / 4.0% / 4.0%) | 1.0 GiB | 9.06 KiB / 4.16 KiB | 0.0 B / 0.0 B
Request completed in 1.85 seconds.
All validation logs written to '~/Code/mysql-extention-mp/logs/validation.log'
Validation passed with no errors
Modify the collect method to collect metrics and attach them to the correct database objects
Now that the test method is working, we can implement the collect method. This is the method where we query MySQL for the metrics we want and send them to VMware Aria Operations as part of the database objects. Before we begin writing code, we need to look up some information about the MySQL management pack. Specifically, we need the following:
- The MySQL Adapter Kind Key
- The MySQL Database Object type
- A way to create a database object that matches a database that already exists on VMware Aria Operations (usually the identifier list, but the name can sometimes work, as in this case).
These will be used to ensure that the metrics are attached to existing MySQL objects, rather than creating new ones.
To get this information, we will ssh into the collector where the MySQL management pack is running, then cd to $ALIVE_BASE/user/plugin/inbound/mysql_adapter3/conf/. From there, open the describe.xml file. The Adapter Kind key is at the top, on the fourth line:
<?xml version = '1.0' encoding = 'UTF-8'?>
<!-- <!DOCTYPE AdapterKind SYSTEM "describeSchema.xsd"> -->
<!-- Copyright (c) 2020 VMware Inc. All Rights Reserved. -->
<AdapterKind key="MySQLAdapter" nameKey="1" version="1" xmlns="http://schemas.vmware.com/vcops/schema">
Inside the AdapterKind tag are ResourceKinds/ResourceKind tags, and we can search for the one that represents the database resource kind. Once we have found it, we can see that it has two identifiers: one for the adapter instance ID, and one for the database name.
<ResourceKinds>
<!-- ... -->
<ResourceKind key="mysql_database" nameKey="64" >
<ResourceIdentifier dispOrder="1" key="adapter_instance_id" length="" nameKey="37" required="true" type="string" identType="1" enum="false" default=""> </ResourceIdentifier>
<ResourceIdentifier dispOrder="2" key="database_name" length="" nameKey="65" required="true" type="string" identType="1" enum="false" default=""> </ResourceIdentifier>
In order to attach a metric to these objects, we will need all identifiers that have identType=1. In this case, those are adapter_instance_id and database_name. This means that the combination of those two fields uniquely identifies the object among all of the mysql_database objects in the MySQLAdapter adapter.
Getting the adapter_instance_id requires a SuiteAPI call: we would need to retrieve the Adapter Instances for MySQLAdapter that have the same host and port identifiers as our adapter, and then retrieve the ID. However, if we look in VMware Aria Operations itself, we can see that each database's name has the format mysql_host/mysql_database, which should be unique (even if VMware Aria Operations isn't using it for determining uniqueness). Thus, a simpler way to get matching objects (in this case) is to construct the name, and ask the SuiteAPI to give us all MySQLAdapter mysql_database objects with those names. Then we can simply attach metrics to the resulting mysql_database objects, which will have all identifiers correctly set by the SuiteAPI.
First, we should remove all the sample code inside the try block. All the code for the following steps should be inside the try block.
Then, we need to establish a connection to MySQL. We can do this in the same way as in the test method. In many cases, creating a function for connecting that is called from both test and collect is worthwhile. Then we can query the list of databases and construct a list of database names that may be present:
# Get the list of databases on this instance
cursor = connection.cursor()
cursor.execute("SHOW databases")
database_names = [f"{hostname}/{database[0]}" for database in cursor]
cursor.close()
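The host/database name construction shown above can also be isolated into a small pure function, which makes the format easy to test on its own. This is an optional refactoring sketch; the function name is ours, and the name format is the one observed in the walkthrough:

```python
def database_object_names(hostname, databases):
    # Build the 'mysql_host/mysql_database' display names that the MySQL MP
    # assigns to its database objects (format observed in VMware Aria Operations).
    return [f"{hostname}/{db}" for db in databases]
```

For example, database_object_names("mysql8-1.localnet", ["sys"]) yields ["mysql8-1.localnet/sys"], matching the object names seen in the collection result.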
We then query the SuiteAPI for mysql_database objects from the MySQLAdapter adapter, with the names we computed. The queries that query_for_resources accepts are documented in the SuiteAPI documentation, and can search on many types of metadata about a resource. After that, we add the returned objects to the result and to a dictionary for quick access later.
# Get the list of objects from the SuiteAPI that represent the MySQL
# databases that are on this instance, and add any we find to the result
databases = {}  # dict of database Objects by name for easy access
with adapter_instance.suite_api_client as suite_api:
    dbs = suite_api.query_for_resources(
        query={
            "adapterKind": ["MySQLAdapter"],
            "resourceKind": ["mysql_database"],
            "name": database_names,
        },
    )
    for db in dbs:
        databases[db.get_identifier_value("database_name")] = db
        # Add each database to the collection result. Objects must be
        # added to the result in order for them to be returned by the
        # collect method.
        result.add_object(db)
Finally, we'll run the query to get the data we want from MySQL, add that data as metrics to the relevant databases, and return the result:
# Run a query to get some additional data. Here we're getting info about
# lock waits on each database
cursor = connection.cursor()
cursor.execute(
    """
    select OBJECT_SCHEMA,
           sum(COUNT_STAR) as COUNT_STAR,
           sum(SUM_TIMER_WAIT) as SUM_TIMER_WAIT,
           max(MAX_TIMER_WAIT) as MAX_TIMER_WAIT,
           min(MIN_TIMER_WAIT) as MIN_TIMER_WAIT
    from performance_schema.table_lock_waits_summary_by_table
    group by OBJECT_SCHEMA
    """
)

# Iterate through the results of the query, and add them to the appropriate
# database Object as metrics.
for row in cursor:
    if len(row) != 5:
        logger.error(f"Row is not expected size: {repr(row)}")
        continue
    database = databases.get(row[0])
    if not database:
        logger.info(f"Database {row[0]} not found in Aria Operations")
        continue
    database.with_metric("Table Locks|Count", float(row[1]))
    database.with_metric("Table Locks|Sum", float(row[2]))
    database.with_metric("Table Locks|Max", float(row[3]))
    if float(row[1]) > 0:
        database.with_metric("Table Locks|Avg", float(row[2]) / float(row[1]))
    else:
        database.with_metric("Table Locks|Avg", 0)
    database.with_metric("Table Locks|Min", float(row[4]))
cursor.close()

return result
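The per-row metric math (including the divide-by-zero guard for the average) can be factored into a small pure function so it can be unit-tested in isolation. This is an optional refactoring sketch; the function name is ours:

```python
def lock_wait_metrics(count, total, maximum, minimum):
    # Compute the five 'Table Locks' metric values from one row of the
    # lock-wait query, guarding against division by zero for the average.
    avg = total / count if count > 0 else 0.0
    return {
        "Table Locks|Count": float(count),
        "Table Locks|Sum": float(total),
        "Table Locks|Max": float(maximum),
        "Table Locks|Avg": float(avg),
        "Table Locks|Min": float(minimum),
    }
```

For the sys row in the sample data (count 2, sum 3946368), this yields an average of 1973184.0, matching the collection result shown in the verification step.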
Verify the MP
To verify the MP, run mp-test using the same connection we created earlier. If there are any mysql_database objects that have entries in the table_lock_waits_summary_by_table table, we should see those returned in the collection result. For example, if the MySQL management pack is configured to collect loadgen, mysql, and sys, and the data query returns:
object_schema      | count_star | sum_timer_wait | max_timer_wait | min_timer_wait
-------------------+------------+----------------+----------------+---------------
mysql              |          0 |              0 |              0 |              0
performance_schema |          0 |              0 |              0 |              0
sys                |          2 |        3946368 |        2255204 |        1691164
Then we would expect to see entries for each database monitored by MySQL, but new data should be present only for the subset that was also returned by the data query:
{
  "nonExistingObjects": [],
  "relationships": [],
  "result": [
    {
      "events": [],
      "key": {
        "adapterKind": "MySQLAdapter",
        "identifiers": [
          {
            "isPartOfUniqueness": true,
            "key": "adapter_instance_id",
            "value": "347062"
          },
          {
            "isPartOfUniqueness": true,
            "key": "database_name",
            "value": "loadgen"
          }
        ],
        "name": "mysql8-1.localnet/loadgen",
        "objectKind": "mysql_database"
      },
      "metrics": [],
      "properties": []
    },
    {
      "events": [],
      "key": {
        "adapterKind": "MySQLAdapter",
        "identifiers": [
          {
            "isPartOfUniqueness": true,
            "key": "adapter_instance_id",
            "value": "347062"
          },
          {
            "isPartOfUniqueness": true,
            "key": "database_name",
            "value": "mysql"
          }
        ],
        "name": "mysql8-1.localnet/mysql",
        "objectKind": "mysql_database"
      },
      "metrics": [
        {
          "key": "Table Locks|Count",
          "numberValue": 0.0,
          "timestamp": 1681767040181
        },
        {
          "key": "Table Locks|Sum",
          "numberValue": 0.0,
          "timestamp": 1681767040181
        },
        {
          "key": "Table Locks|Max",
          "numberValue": 0.0,
          "timestamp": 1681767040181
        },
        {
          "key": "Table Locks|Avg",
          "numberValue": 0.0,
          "timestamp": 1681767040181
        },
        {
          "key": "Table Locks|Min",
          "numberValue": 0.0,
          "timestamp": 1681767040181
        }
      ],
      "properties": []
    },
    {
      "events": [],
      "key": {
        "adapterKind": "MySQLAdapter",
        "identifiers": [
          {
            "isPartOfUniqueness": true,
            "key": "adapter_instance_id",
            "value": "347062"
          },
          {
            "isPartOfUniqueness": true,
            "key": "database_name",
            "value": "sys"
          }
        ],
        "name": "mysql8-1.localnet/sys",
        "objectKind": "mysql_database"
      },
      "metrics": [
        {
          "key": "Table Locks|Count",
          "numberValue": 2.0,
          "timestamp": 1681767040182
        },
        {
          "key": "Table Locks|Sum",
          "numberValue": 3946368.0,
          "timestamp": 1681767040182
        },
        {
          "key": "Table Locks|Max",
          "numberValue": 2255204.0,
          "timestamp": 1681767040182
        },
        {
          "key": "Table Locks|Avg",
          "numberValue": 1973184.0,
          "timestamp": 1681767040182
        },
        {
          "key": "Table Locks|Min",
          "numberValue": 1691164.0,
          "timestamp": 1681767040182
        }
      ],
      "properties": []
    }
  ]
}
When everything is working as expected locally using mp-test, we can run mp-build and install on VMware Aria Operations for a final verification.
Next Steps
Troubleshooting
When starting Docker, I get 'Permission denied while trying to connect to the Docker daemon'
If you're having trouble getting Docker to run on your system, you can refer to the Docker documentation for instructions on how to start Docker on macOS, Linux, and Windows 10 and 11.
When starting Docker on Windows, I get 'Cannot connect to Docker daemon'
If you're having trouble with permissions on a Windows system, you can refer to the Docker documentation for instructions on how to Understand permission requirements for Windows.
How can I set up an AWS container registry for my project?
AWS container registries use the aws CLI to authenticate, so users should authenticate to their AWS container registry and create a repository before running mp-build.
- Log in to your registry using the aws CLI
- Create a repository
- Run mp-build and use the registry tag when prompted (it usually looks like aws_account_id.dkr.ecr.region.amazonaws.com/hello-repository)
How can I set up a Docker Hub container registry for my project?
Docker recommends using an access token instead of your login password when authenticating to Docker Hub, so users should authenticate their Docker Hub account before running mp-build.
- Generate a Docker Hub token.
- Open the config.json file in the project's root directory, then replace the value of docker_registry with the tag of the Docker Hub repository prepended with docker.io. For example, if the docker tag is username/docker-registry-test:tagname, then the value will be docker.io/username/docker-registry-test.
VMware Aria Operations only supports anonymous pulling of images, which may cause issues when using Docker Hub, since there is a download rate limit.
How can I set up a Management Pack that uses a private container registry?
VMware Aria Operations only supports anonymous pulling of images; however, cloud proxies look up images locally before attempting to pull.
- ssh into the cloud proxy where the adapter is going to be set up
- Pull the same image used by the management pack (usually using the docker CLI)
- Install the Management Pack in VMware Aria Operations
How can I change the container registry for my project?
Open the config.json file located in the project's root directory, then replace the value for docker_registry with the tag of the repository you want to use. The next time mp-build is run, the new tag will be used and validated.
Where are the adapter logs stored locally?
Logs generated by mp-test or mp-build are stored in the logs sub-directory of the project.
Where are the adapter logs stored in VMware Aria Operations?
Logs are generated and stored on the cloud proxy where the adapter is running at $ALIVE_BASE/user/log/adapter/<ADAPTERKEY>_adapter3/<ADAPTER_INTERNAL_INSTANCE_ID>.
ADAPTERKEY should match the adapter key used in the manifest.txt, and ADAPTER_INTERNAL_INSTANCE_ID should match the Internal ID found in VMware Aria Operations at Environment → Inventory → Adapter Instances → <ADAPTER_DISPLAY_NAME> → <ADAPTER_INSTANCE>, in the rightmost column.
The Internal ID column is not displayed by default. To display it, click the lower-left 'column' icon and check the Internal ID box.
What are the different log files used for?
There are five types of log files: adapter, server, build, test, and validation logs. Each log file name is prefixed with the type of log, followed by a number that represents rollover.
- server.log: Contains all logs related to the HTTP server inside the container. Server logs can't be modified, since the server code comes packaged inside the base-adapter Python image.
- adapter.log: Contains all logs related to the adapter. Adapter logs are all the logs generated by adapter code (e.g., the test() method or the collect() method inside app/adapter.py).
- test.log: Contains all logs related to mp-test.
- build.log: Contains all logs related to mp-build.
- validation.log: Contains a log of the validations performed by mp-test on the collection results. Validation logs are only generated locally.
How do I add logs to my adapter?
The template adapter defines a logger variable in the adapter.py file that configures all adapter logging using adapter_logging from the Python SDK. The logger only needs to be configured once; to generate logs in other files, simply import the Python logging module, e.g.:
import logging

logger = logging.getLogger(__name__)

def my_method():
    logger.info("info log")
    logger.warning("warning log")
    logger.error("error log")
    logger.debug("debug log")
    ...
How do I change the server and/or adapter log level?
You can set the log levels for the server and adapter inside the loglevels.cfg file, which is located in logs/loglevels.cfg locally, and on the cloud proxy at $ALIVE_BASE/user/log/adapters/<ADAPTERKEY>_adapter3/<ADAPTER_INTERNAL_INSTANCE_ID>/loglevels.cfg.
If the file does not exist, the system generates it after a collection/test.
ADAPTERKEY should match the name of the adapter used in the manifest.txt, and ADAPTER_INTERNAL_INSTANCE_ID should match the Internal ID found in VMware Aria Operations at Environment → Inventory → Adapter Instances → <ADAPTER_DISPLAY_NAME> → <ADAPTER_INSTANCE>, in the rightmost column.
The Internal ID column is not displayed by default. To display it, click the lower-left 'column' icon and check the Internal ID box.
How do I change the log level of mp-init, mp-test, or mp-build?
All SDK tools read the LOG_LEVEL environment variable to set the log level of their console output. For example, to see verbose output from any of the CLI tools, set the LOG_LEVEL variable to debug:
For Linux and macOS
LOG_LEVEL=debug mp-build
For Windows
set LOG_LEVEL=debug
mp-build
For Windows, set the log level back to info after debugging.
The SDK CLI tools support the debug, warn, info, and error levels.
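As an illustration of how such an environment-driven log level can be consumed in Python (a generic sketch of the pattern, not the SDK's actual implementation; the function name is ours):

```python
import logging
import os

def level_from_env(default="info"):
    # Map a LOG_LEVEL-style string ('debug', 'warn', 'info', 'error')
    # to a Python logging level, falling back to the default when the
    # variable is unset or unrecognized.
    names = {
        "debug": logging.DEBUG,
        "warn": logging.WARNING,
        "info": logging.INFO,
        "error": logging.ERROR,
    }
    value = os.environ.get("LOG_LEVEL", default).lower()
    return names.get(value, names[default])
```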
Collection returns '500 INTERNAL SERVER ERROR'
Internal server errors can happen for various reasons; however, the most common cause is an unhandled exception or a syntax error in the adapter's code. Check the server logs for clues about the issue. Sometimes, the problem can be detected using mp-test and going over the terminal output.
Collection returns 'No collection result was found'
mp-test runs a series of validation tests after collection; if the collection has no results, then each validation step will report the result as missing.
When a collection result is missing, it usually means an error occurred during collection, but the Adapter handled the error. When the Adapter handles an error, the response contains an error message, which the console displays. For example:
def collect(adapter_instance: AdapterInstance) -> CollectResult:
    result = CollectResult()
    try:
        raise Exception("oops")
        # ...
    except Exception as e:
        logger.error("Unexpected collection error")
        logger.exception(e)
        result.with_error("Unexpected collection error: " + repr(e))
    return result
This code will output
Building adapter [Finished]
Waiting for adapter to start [Finished]
Running Collect [Finished]
Collection Failed: Unexpected collection error: Exception('oops')
Avg CPU % | Avg Memory Usage % | Memory Limit | Network I/O | Block I/O
------------------------------+----------------------------+--------------+---------------------+--------------
21.1 % (0.0% / 21.1% / 42.2%) | 4.0 % (4.0% / 4.0% / 4.0%) | 1.0 GiB | 3.24 KiB / 6.67 KiB | 0.0 B / 0.0 B
Collection completed in 0.45 seconds.
No collection result was found.
No collection result was found.
All validation logs written to '/Users/user/management-pack/test-management-pack/logs/validation.log'
As seen above, the Exception is mentioned as the reason for the collection error, and the 'No collection result was found' message is also shown. Using the collection error message along with the adapter.log can help trace the cause of the issue.
mp-build returns 'Unable to build pak file'
In most cases, this error indicates issues with building the container image. The most probable causes are:
- An unknown instruction in the Dockerfile:
mp-build
Building adapter [Finished]
Unable to build pak file
ERROR: Unable to build Docker file at /Users/user/code/aria_ops/management-packs/test:
{'message': 'dockerfile parse error line 7: unknown instruction: COP'}
- A command inside the Dockerfile failed:
mp-build
Building adapter [Finished]
Unable to build pak file
ERROR: Unable to build Docker file at /Users/user/code/management-packs/test:
The command '/bin/sh -c pip3 install -r adapter_requirements.txt --upgrade' returned a non-zero code: 1
The solution for case 1 is to fix the typo/command by editing the Dockerfile. For case 2, however, the solution might not be evident at first sight. Since the error comes from building the image itself, we can run docker build . in the project's root directory and look at the stack trace for clues.
VMware Aria Operations returns 'Unknown adapter type' when setting up new adapter instance
(Example of an 'Unknown Adapter Type' error message for an adapter with type/key 'Testserver'.)
If the pak file installs successfully but an error occurs when creating an account (adapter instance), check that:
- The Collector/Group the MP is running on is a Cloud Proxy, and
- The Cloud Proxy supports containerized adapters. Containerized adapters are supported in VMware Aria Operations version 8.10.0 and later.
I don't see an answer to my issue
If none of the above resolve your issue, please open a Q&A discussion on the GitHub Discussions page that describes the issue you are having. You can also submit a new bug report issue here, but we recommend opening a discussion first.
Contributing
The vmware-aria-operations-integration-sdk project team welcomes contributions from the community. Before you start working with this project, please read and sign our Contributor License Agreement (https://cla.vmware.com/cla/1/preview). If you wish to contribute code and you have not signed our Contributor License Agreement (CLA), our bot will prompt you to do so when you open a Pull Request. For any questions about the CLA process, please refer to our FAQ.
License
This project is licensed under the Apache-2.0 License.
Project details
Hashes for vmware_aria_operations_integration_sdk-0.5.1.tar.gz
Algorithm   | Hash digest
------------+-----------------------------------------------------------------
SHA256      | 46617ff56a2f4ed559dc0b9a19ef8f48598121be0270d75068f158a23e8daf74
MD5         | 774ba0fea36dc49f5f8b1b603ea90875
BLAKE2b-256 | b6a1292180bf8653fedb1e152f8cfa85bfc993e9318864703ceb15091627716e
Hashes for vmware_aria_operations_integration_sdk-0.5.1-py3-none-any.whl
Algorithm   | Hash digest
------------+-----------------------------------------------------------------
SHA256      | 778cdafae1d3fbfe96fe69cabc8b5657f98f98fc2c5fb2589c70c345d8be6e2e
MD5         | 7fd13c81eb7932f6db98aa8ec25e8219
BLAKE2b-256 | 228b575b37dc03f223d5737a2bcbec5c5b7427c6221e7caa53e65d7e6718e59a