Brief Overview on How to Use the MultiViz Analytics Engine (MVG) Library

Introduction

The MultiViz Analytics Engine (MVG) Library is a Python library that enables the use of Viking Analytics’s MVG Analytics Service. Viking Analytics (VA) exposes this service through a REST API on its analytics server. The VA-MVG Python package simplifies access to this API, allowing you to interact with the service via regular Python calls.

This interactive document shows the whole flow of working with the service from Python, from data upload to the retrieval of analysis results for vibration signals.

Signing up for the service

For commercial use of the service, you will need to acquire a token from Viking Analytics. The example here uses a built-in demo token. Please contact us for a free trial and we will provide you with your own token.

Python Pre-requisites

  1. Make sure you have Python 3.6 or higher installed.

  2. Install the MVG package from “https://pypi.org/project/va-mvg/”

  3. Make sure you have the following packages installed (json, matplotlib…)

In the following sections, we will walk through the code needed to interact with the API.

Imports

First, we begin with the general imports of Python libraries that are needed as part of this project.

[1]:
import json
import os
import sys
from pathlib import Path
import pandas as pd
from requests import HTTPError

We proceed by installing the MVG library in our project.

[2]:
!{sys.executable} -m pip install --user va-mvg
/usr/lib/python3/dist-packages/secretstorage/dhcrypto.py:15: CryptographyDeprecationWarning: int_from_bytes is deprecated, use int.from_bytes instead
  from cryptography.utils import int_from_bytes
/usr/lib/python3/dist-packages/secretstorage/util.py:19: CryptographyDeprecationWarning: int_from_bytes is deprecated, use int.from_bytes instead
  from cryptography.utils import int_from_bytes
Requirement already satisfied: va-mvg in /home/tuix/.local/lib/python3.8/site-packages (0.0.0.dev0)
Requirement already satisfied: pandas in /home/tuix/.local/lib/python3.8/site-packages (from va-mvg) (1.2.4)
Requirement already satisfied: numpy in /home/tuix/.local/lib/python3.8/site-packages (from va-mvg) (1.20.2)
Requirement already satisfied: typer in /home/tuix/.local/lib/python3.8/site-packages (from va-mvg) (0.3.2)
Requirement already satisfied: requests in /home/tuix/.local/lib/python3.8/site-packages (from va-mvg) (2.25.1)
Requirement already satisfied: semver in /home/tuix/.local/lib/python3.8/site-packages (from va-mvg) (2.13.0)
Requirement already satisfied: tabulate in /home/tuix/.local/lib/python3.8/site-packages (from va-mvg) (0.8.9)
Requirement already satisfied: matplotlib in /home/tuix/.local/lib/python3.8/site-packages (from va-mvg) (3.4.2)
Requirement already satisfied: kiwisolver>=1.0.1 in /home/tuix/.local/lib/python3.8/site-packages (from matplotlib->va-mvg) (1.3.1)
Requirement already satisfied: pyparsing>=2.2.1 in /home/tuix/.local/lib/python3.8/site-packages (from matplotlib->va-mvg) (2.4.7)
Requirement already satisfied: python-dateutil>=2.7 in /home/tuix/.local/lib/python3.8/site-packages (from matplotlib->va-mvg) (2.8.1)
Requirement already satisfied: cycler>=0.10 in /home/tuix/.local/lib/python3.8/site-packages (from matplotlib->va-mvg) (0.10.0)
Requirement already satisfied: pillow>=6.2.0 in /home/tuix/.local/lib/python3.8/site-packages (from matplotlib->va-mvg) (8.1.0)
Requirement already satisfied: six in /usr/lib/python3/dist-packages (from cycler>=0.10->matplotlib->va-mvg) (1.14.0)
Requirement already satisfied: pytz>=2017.3 in /home/tuix/.local/lib/python3.8/site-packages (from pandas->va-mvg) (2020.5)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/lib/python3/dist-packages (from requests->va-mvg) (1.25.8)
Requirement already satisfied: chardet<5,>=3.0.2 in /usr/lib/python3/dist-packages (from requests->va-mvg) (3.0.4)
Requirement already satisfied: idna<3,>=2.5 in /usr/lib/python3/dist-packages (from requests->va-mvg) (2.8)
Requirement already satisfied: certifi>=2017.4.17 in /usr/lib/python3/dist-packages (from requests->va-mvg) (2019.11.28)
Requirement already satisfied: click<7.2.0,>=7.1.1 in /home/tuix/.local/lib/python3.8/site-packages (from typer->va-mvg) (7.1.2)

We follow by importing the MVG library.

The library documentation is available at https://vikinganalytics.github.io/mvg/index.html.

[3]:
from mvg import MVG

We begin by instantiating a “session” object with the MVG library. A session object basically caches the endpoint and the token, to simplify the calls to the MVG library.

NOTE: Each TOKEN is used for Authorization AND Authentication. Thus, each unique token represents a unique user, and each user has its own, unique database on the VA-MVG service.

[4]:
ENDPOINT = "https://api.beta.multiviz.com"
# Replace by your own Token
VALID_TOKEN = os.environ['TEST_TOKEN']
[5]:
session = MVG(ENDPOINT, VALID_TOKEN)

We now check if the server is alive. The hello message contains the API version:

[6]:
hello_message = json.dumps(session.say_hello())
hello_message

[6]:
'{"api": {"name": "MultiViz Engine API", "version": "v0.2.0", "swagger": "http://api.beta.multiviz.com/docs"}}'

Sources and Measurements

Before we begin, we will check whether any sources already exist and, if they do, delete them.

[7]:
sources = session.list_sources()

print("Retrieved sources")
for src in sources:
    print(src)
    print(f"Deleting {src['source_id']}")
    session.delete_source(src['source_id'])
Retrieved sources

The example below revolves around a source with source_id “u0001”.

For convenience, this source and its measurements are available with the package distribution.

You can retrieve the data from our public charlie repo https://github.com/vikinganalytics/va-data-charlie.git

[8]:
!git clone --depth=1 https://github.com/vikinganalytics/va-data-charlie.git
Cloning into 'va-data-charlie'...
remote: Enumerating objects: 494, done.
remote: Counting objects: 100% (494/494), done.
remote: Compressing objects: 100% (403/403), done.
remote: Total 494 (delta 96), reused 434 (delta 87), pack-reused 0
Receiving objects: 100% (494/494), 70.52 MiB | 1.91 MiB/s, done.
Resolving deltas: 100% (96/96), done.
Updating files: 100% (955/955), done.
[9]:
# Path to the source folder
REF_DB_PATH = Path.cwd() / "va-data-charlie" / "charlieDb" / "acc"
REF_DB_PATH
# Definition of the source_id
SOURCE_ID = "u0001"
SOURCE_ID

[9]:
'u0001'

Creating a Source

A source represents a vibration data source, typically a vibration sensor. Internally, in the analytics engine and data storage, all vibration data is stored under its source. In essence, a source is an identifier formed by:

- the source ID
- metadata with required fields
- optional, arbitrary customer-specific ‘free form’ data belonging to the source.

The vibration service relies only on the required fields. The free form data gives the client side a way to keep together source information, measurements and metadata that may be of interest in relation to the built-in analytics features of the service. Examples of free form data include the location of the sensor or the name of the asset on which it is mounted. As we will see later, timestamps are internally represented as milliseconds since EPOCH (Jan 1st 1970); for that reason it is good practice to include the timezone where the measurement originated in the metadata.
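
To make this concrete, here is a minimal sketch (plain pandas, not part of the MVG API) that converts one of the epoch timestamps used later in this example, which are in seconds, into a local datetime using the ‘Europe/Stockholm’ timezone from the metadata example below:

import pandas as pd

# Sketch only: localize an epoch timestamp (seconds, as in the charlie dataset
# used below) with the timezone we recommend storing in the source metadata.
ts_epoch = 1570186860
local_time = pd.to_datetime(ts_epoch, unit="s", utc=True).tz_convert("Europe/Stockholm")
print(local_time)  # 2019-10-04 13:01:00+02:00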

There are two ways to create a source:

[10]:
# OPTION 1: Writing the meta-data directly when creating the source
SOURCE_IDP = "u001"
meta_information = {'assetId': 'assetJ', 'measPoint': 'mloc01', 'location': 'cancun', 'timezone': 'Europe/Stockholm'}
session.create_source(SOURCE_IDP, meta=meta_information, channels=["acc"])
session.get_source(SOURCE_IDP)
# We delete this first example source to show the second option
session.delete_source(SOURCE_IDP)
[11]:
# OPTION 2: Uploading the metadata from a json file
src_path = REF_DB_PATH / SOURCE_ID
meta_filename = src_path / "meta.json"
with open(meta_filename, "r") as json_file:
    meta = json.load(json_file)
session.create_source(SOURCE_ID, meta=meta, channels=["acc"])
session.get_source(SOURCE_ID)
[11]:
{'source_id': 'u0001',
 'meta': {'assetId': 'assetA', 'measPoint': 'mloc01', 'location': 'paris'},
 'properties': {'data_class': 'waveform', 'channels': ['acc']}}

List sources

We can now check whether our source has actually been created by listing all the sources in the database. This function returns all the existing information about each source.

[12]:
sources = session.list_sources()
sources
[12]:
[{'source_id': 'u0001',
  'meta': {'assetId': 'assetA', 'measPoint': 'mloc01', 'location': 'paris'},
  'properties': {'data_class': 'waveform', 'channels': ['acc']}}]

Uploading a Measurement

Now that we have created a source, we can upload vibration measurements related to the source. The information needed to create a measurement consists of:

- sid: the source ID, used to associate the measurement with a source.
- duration: float value that represents the duration, in seconds, of the measurement, used to estimate the sampling frequency.
- timestamp: integer representing the milliseconds since EPOCH of when the measurement was taken.
- data: the raw data of the vibration measurement, as lists of floating point values (in this example, one list per channel).
- meta: additional meta information for later use by the client, not processed by the analytics engine.
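
As an illustration of the call shape (a sketch with made-up values, not meant to be executed against the demo source; the loop below uploads the real charlie data):

# Illustrative sketch only, with hypothetical values.
example_timestamp = 1570186860             # epoch timestamp of the measurement
example_data = {"acc": [0.0, 0.1, -0.1]}   # raw samples, one list per channel
session.create_measurement(sid=SOURCE_ID,
                           duration=2.867,          # seconds
                           timestamp=example_timestamp,
                           data=example_data,
                           meta={"comment": "example upload"})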

In this example, all the measurement data is stored as csv and json files, where the timestamp is the name of each of these files.

[13]:
# meas is a list of timestamps representing the measurements in our repo
meas = [f.stem for f in Path(src_path).glob("*.csv")]
[14]:
# We iterate over all of the elements in this list
for m in meas:

    # raw data per measurement
    TS_MEAS = str(m) + ".csv"  # filename
    TS_MEAS = REF_DB_PATH / SOURCE_ID / TS_MEAS  # path to file
    ts_df = pd.read_csv(TS_MEAS)  # read csv into df
    accs = ts_df.to_dict("list")  # convert to list
    print(f"Read {len(ts_df)} samples")

    # meta information file per measurement
    TS_META = str(m) + ".json"  # filename
    TS_META = REF_DB_PATH / SOURCE_ID / TS_META  # path
    with open(TS_META, "r") as json_file:  # read json
        meas_info = json.load(json_file)  # into dict
    print(f"Read meta:{meas_info}")

    # get duration and other meta info
    duration = meas_info['duration']
    meta_info = meas_info['meta']

    # Upload measurements
    print(f"Uploading {TS_MEAS}")
    try:
        session.create_measurement(sid=SOURCE_ID,
                                   duration=duration,
                                   timestamp=m,
                                   data=accs,
                                   meta=meta_info)
    except HTTPError as exc:
        print(exc)
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1570186860.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1570273260.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1570359660.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1570446060.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1570532460.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1570618860.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1570705260.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1570791660.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1570878060.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1570964460.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1571050860.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1571137260.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1571223660.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1571310060.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1571396460.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1571482860.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1571569260.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1571655660.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1571742060.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1571828460.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1571914860.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1572001260.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1572087660.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1572177660.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1572264060.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1572350460.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1572436860.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1572523260.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1572609660.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1572696060.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1572782460.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1572868860.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1572955260.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1573041660.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1573128060.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1573214460.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1573300860.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1573387260.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1573473660.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1573560060.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1573646460.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1573732860.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1573819260.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1573905660.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1573992060.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1574078460.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1574164860.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1574251260.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1574337660.csv
Read 40000 samples
Read meta:{'duration': 2.8672073400507907, 'meta': {}}
Uploading ~/mvg/docs/source/content/examples/va-data-charlie/charlieDb/acc/u0001/1574424060.csv

We check that the measurements have actually been created by reading them back.

[15]:
m = session.list_measurements(SOURCE_ID)
print(f"Read {len(m)} stored measurements")

Read 50 stored measurements

Analysis

We begin by listing all the features available in the service.

[16]:
available_features = session.supported_features()
available_features
[16]:
{'RMS': '1.0.0', 'ModeId': '0.1.1', 'BlackSheep': '1.0.0', 'KPIDemo': '1.0.0'}

In this example, we will show how to request the RMS and ModeId features to be applied to the previously defined SOURCE_ID. The BlackSheep feature is aimed at population analytics; you can read about how to use it in the “Analysis and Results Visualization” example.

We will begin with the ‘RMS’ feature, which provides the RMS value for each signal. We proceed to request this analysis from the MVG service.

[17]:
RMS_u0001 = session.request_analysis(SOURCE_ID, 'RMS')
RMS_u0001
[17]:
{'request_id': 'bc4ea1194b4929764a08e97c60360cb6', 'request_status': 'queued'}

The analysis request returns a dictionary with two elements. The first element is a "request_id" that can be used to retrieve the results later. The second element is "request_status", which gives the status right after the analysis request has been placed.

Before we can get the analysis results, we need to wait until the analysis has completed successfully.

We can query for the status of our requested analysis. The possible statuses are (a small polling sketch follows this list):

- Queued: The analysis has not started on the remote server and is waiting in the queue to begin.
- Ongoing: The analysis is being processed at this time.
- Failed: The analysis is complete and failed to produce a result.
- Successful: The analysis is complete and successfully produced a result.
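
The example below checks the status once; as a minimal sketch, a hypothetical helper (not part of the MVG library) could poll session.get_analysis_status until the request finishes:

import time

# Hypothetical helper: poll until the analysis leaves the queued/ongoing
# states, or give up after `timeout` seconds.
def wait_for_analysis(session, request_id, poll_interval=5.0, timeout=300.0):
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = session.get_analysis_status(request_id)
        if status in ("successful", "failed"):
            return status
        time.sleep(poll_interval)
    raise TimeoutError(f"Analysis {request_id} did not finish within {timeout} s")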

[18]:
REQUEST_ID_RMS_u0001 = RMS_u0001['request_id']
status = session.get_analysis_status(REQUEST_ID_RMS_u0001)
print(f"RMS Analysis: {status}")
RMS Analysis: successful

The next feature is ‘ModeId’. The ‘ModeId’ feature identifies all the operating modes of an individual asset over time. A similar procedure is repeated to request the analysis of the “ModeId” feature for our source “u0001”.

[19]:
ModeId_u0001 = session.request_analysis(SOURCE_ID, 'ModeId')
ModeId_u0001
[19]:
{'request_id': 'fae66b2396b7dde3039a2f58e287686a', 'request_status': 'queued'}

We also check the status for our second feature.

[20]:
REQUEST_ID_ModeId_u0001 = ModeId_u0001['request_id']
status = session.get_analysis_status(REQUEST_ID_ModeId_u0001)
print(f"ModeId Analysis: {status}")
ModeId Analysis: successful

We can proceed to get the results using the corresponding request_id of each requested feature.

The output of the get_analysis_results function is a dictionary, and we show the keys of one of those dictionaries. The keys are the same for all features and comprise seven elements (a small sanity-check sketch follows this list):

- "status" indicates if the analysis was successful.
- "request_id" is the identifier of the requested analysis.
- "feature" is the name of the requested feature.
- "results" includes the numeric results.
- "inputs" includes the input information for the requested analysis.
- "error_info" includes the error information in case the analysis fails; it is empty if the analysis is successful.
- "debug_info" includes debugging (log) information related to a failed analysis.
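
As a minimal sketch (a hypothetical helper, not part of the library) of how these keys might be inspected before digging into the numeric results:

# Hypothetical helper: basic sanity check on a returned analysis dictionary,
# using only the keys described above. Assumes "status" carries the same
# strings as get_analysis_status ("successful", "failed", ...).
def check_analysis(result):
    if result["status"] != "successful":
        print("Analysis failed:", result["error_info"])
        print("Debug info:", result["debug_info"])
    else:
        print(f"Feature {result['feature']} (request {result['request_id']}) succeeded")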

[21]:
rms_results = session.get_analysis_results(request_id=REQUEST_ID_RMS_u0001)
mode_results = session.get_analysis_results(request_id=REQUEST_ID_ModeId_u0001)

rms_results.keys()

[21]:
dict_keys(['status', 'request_id', 'feature', 'results', 'inputs', 'error_info', 'debug_info'])

Visualization

The MVG Library incorporates a module that facilitates the handling of the results and their visualization. Using this module, it becomes easy to convert the results into a Pandas dataframe for easier manipulation. In addition, it makes it possible to quickly evaluate the results through a summary or a plot.

The name of this module is "analysis_classes" and we begin by importing it.

[22]:
from mvg import analysis_classes

The first step requires parsing the results available from the analysis. We begin by showing how to do this with the RMS feature.

[23]:
rms_results_parsed = analysis_classes.parse_results(rms_results)

From here, we can convert these results into a Pandas dataframe.

[24]:
df_rms = rms_results_parsed.to_df()
df_rms.head()

[24]:
timestamps rms rms_dc dc utilization
0 1570186860 0.647086 0.662108 -0.140237 1
1 1570273260 0.647123 0.662183 -0.140420 1
2 1570359660 0.646619 0.661652 -0.140239 1
3 1570446060 0.646873 0.661923 -0.140347 1
4 1570532460 0.646643 0.661714 -0.140423 1

From the results, we can see that the RMS feature provides the rms value, the dc component and the rms without the dc component. All this is available for each timestamp.
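
As a rough cross-check (a sketch only, not how the service computes the feature), similar quantities can be estimated from one of the raw measurement files:

import numpy as np

# Sketch only: estimate DC and RMS from one raw measurement, assuming the
# single-channel csv layout used above.
raw = pd.read_csv(src_path / "1570186860.csv").iloc[:, 0].to_numpy()
dc = raw.mean()                                   # DC component
rms_incl_dc = np.sqrt(np.mean(raw ** 2))          # RMS including DC
rms_excl_dc = np.sqrt(np.mean((raw - dc) ** 2))   # RMS with DC removed
print(dc, rms_incl_dc, rms_excl_dc)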

We can request to display a summary of the results.

[25]:
rms_results_parsed.summary()

=== RMS ===
request_id bc4ea1194b4929764a08e97c60360cb6
from 1570186860 to 1574424060

+-------+--------------+------------+------------+------------+---------------+
|       |   timestamps |        rms |     rms_dc |         dc |   utilization |
|-------+--------------+------------+------------+------------+---------------|
| count | 50           | 50         | 50         | 50         |            50 |
| mean  |  1.57231e+09 |  0.611691  |  0.623692  | -0.120874  |             1 |
| std   |  1.26105e+06 |  0.0565414 |  0.0563824 |  0.0141936 |             0 |
| min   |  1.57019e+09 |  0.484564  |  0.497987  | -0.140524  |             1 |
| 25%   |  1.57125e+09 |  0.627912  |  0.637381  | -0.140196  |             1 |
| 50%   |  1.57231e+09 |  0.628307  |  0.6378    | -0.112316  |             1 |
| 75%   |  1.57337e+09 |  0.64684   |  0.661892  | -0.10966   |             1 |
| max   |  1.57442e+09 |  0.647694  |  0.662754  | -0.109065  |             1 |
+-------+--------------+------------+------------+------------+---------------+
[25]:
timestamps rms rms_dc dc utilization
count 5.000000e+01 50.000000 50.000000 50.000000 50.0
mean 1.572306e+09 0.611691 0.623692 -0.120874 1.0
std 1.261051e+06 0.056541 0.056382 0.014194 0.0
min 1.570187e+09 0.484564 0.497987 -0.140524 1.0
25% 1.571245e+09 0.627912 0.637381 -0.140196 1.0
50% 1.572307e+09 0.628307 0.637800 -0.112316 1.0
75% 1.573366e+09 0.646840 0.661892 -0.109660 1.0
max 1.574424e+09 0.647694 0.662754 -0.109065 1.0

Finally, we can generate a plot that displays these results.

[26]:
rms_results_parsed.plot()
[Figure: plot produced by rms_results_parsed.plot()]
[26]:
''

All these functions are available for the ModeId feature as well. We proceed to repeat the same procedure for this other feature.

We begin by parsing the results. In this particular case, we need to define the unit of time used for the epoch-to-datetime conversion. The default unit is milliseconds, but our timestamps are in seconds. The timezone can also be defined to make the converted datetimes more accurate.

[27]:
mode_results_parsed = analysis_classes.parse_results(mode_results, t_unit="s")

First, we generate the pandas dataframe of ModeId results.

[28]:
df_mode = mode_results_parsed.to_df()
df_mode.head()
[28]:
timestamps labels uncertain mode_probability
0 1570186860 0 False 0.001262
1 1570273260 0 False 0.001833
2 1570359660 0 False 0.000154
3 1570446060 0 False 0.000546
4 1570532460 0 False 0.000025

From the results, we can see that the ModeId feature provides a mode label for each timestamp, together with a boolean describing the certainty around this mode label and its probability.

We can also request to display a summary of the results.

[29]:
mode_results_parsed.summary()
=== ModeId ===
request_id fae66b2396b7dde3039a2f58e287686a
from 1570186860 to 1574424060

Labels
+----------+----------+-----------+--------------------+
|   labels |   counts |   portion |   mode_probability |
|----------+----------+-----------+--------------------|
|        0 |       17 |        34 |                 17 |
|        1 |        8 |        16 |                  8 |
|        2 |       25 |        50 |                 25 |
+----------+----------+-----------+--------------------+

Lables & uncertain labels
+------------+-----------+--------------------+----------+
|            |   portion |   mode_probability |   counts |
|------------+-----------+--------------------+----------|
| (0, False) |        34 |                 17 |       17 |
| (1, False) |        16 |                  8 |        8 |
| (2, False) |        50 |                 25 |       25 |
+------------+-----------+--------------------+----------+

Emerging Modes
+----+---------+-----------------+-----------------+-------------------+
|    |   modes |   emerging_time |   max_prob_time |   max_probability |
|----+---------+-----------------+-----------------+-------------------|
|  0 |       0 |     1.57019e+09 |     1.57114e+09 |        0.00727616 |
|  1 |       1 |     1.57166e+09 |     1.57166e+09 |        0.0230245  |
|  2 |       2 |     1.57235e+09 |     1.57442e+09 |        0.0109123  |
+----+---------+-----------------+-----------------+-------------------+
[29]:
[        counts  portion  mode_probability
 labels
 0           17     34.0              17.0
 1            8     16.0               8.0
 2           25     50.0              25.0,
                   portion  mode_probability  counts
 labels uncertain
 0      False         34.0              17.0      17
 1      False         16.0               8.0       8
 2      False         50.0              25.0      25,
    modes  emerging_time  max_prob_time  max_probability
 0      0     1570186860     1571137260         0.007276
 1      1     1571655660     1571655660         0.023025
 2      2     1572350460     1574424060         0.010912]

The summary of the results describes the number of timestamps for each mode and how many of these timestamps are uncertain. Uncertain areas appear as a gray rectangle above the corresponding periods in the modes plot.

In addition, it provides information on the emerging modes. The emerging time is the timestamp at which each mode first appeared. This information can be useful to identify when a new mode appears in the asset.
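
For reference, the emerging times can also be recovered directly from the ModeId dataframe (a sketch, not a library function):

# Sketch: the emerging time of each mode is the first timestamp at which
# its label occurs in the ModeId results.
emerging_times = df_mode.groupby("labels")["timestamps"].min()
print(emerging_times)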

Finally, we can generate a plot that displays the different modes over time.

[30]:
mode_results_parsed.plot()
[Figure: plot produced by mode_results_parsed.plot()]
[30]:
''

Lastly, we can combine the information from the RMS and ModeId features to display a boxplot of the “RMS” for each one of the operating modes.

First, we merge the “RMS” and “ModeId” dataframes.

[31]:
df_u0001 =  pd.merge_asof(df_rms, df_mode, on="timestamps")
df_u0001.head()
[31]:
timestamps rms rms_dc dc utilization labels uncertain mode_probability Date
0 1570186860 0.647086 0.662108 -0.140237 1 0 False 0.001262 2019-10-04 11:01:00
1 1570273260 0.647123 0.662183 -0.140420 1 0 False 0.001833 2019-10-05 11:01:00
2 1570359660 0.646619 0.661652 -0.140239 1 0 False 0.000154 2019-10-06 11:01:00
3 1570446060 0.646873 0.661923 -0.140347 1 0 False 0.000546 2019-10-07 11:01:00
4 1570532460 0.646643 0.661714 -0.140423 1 0 False 0.000025 2019-10-08 11:01:00

The MVG library provides additional visualization functions that can help towards this goal. Thus, we import the plotting module.

[32]:
from mvg import plotting

Now, we can proceed to plot the boxplot.

[33]:
plotting.modes_boxplot(df_u0001, "rms", SOURCE_ID)
[33]:
<AxesSubplot:title={'center':'Boxplot for u0001'}, xlabel='Modes', ylabel='rms'>
[Figure: boxplot produced by plotting.modes_boxplot (rms per mode for u0001)]

Here we conclude our brief overview of how to start using the MultiViz Analytics Engine (MVG) Library.