Release 24.02 of the CareAR Core and AI Platforms delivers the following:
- Visual Verification
- EB: Generative AI Image Creation
- EB: Intelligent Search upgrade
- Instruct Web App: Responsive Layout on Desktop and Tablet Browsers
- Additional EB/Instruct Improvements
- New link structure introduced for new Experiences
- Instruct Web App: Improvements to player lighting
- Export Instruct Session Data
- Assist Recording Retrieval and Playback
- Assist Recording File Reorganization on AWS S3
- Formal support for Microsoft Edge (portal, Assist web app, Instruct web app)
- Additional Assist Improvements
- Assist web app audio improvements
- Prep: Add Allow Screen Share to Roles-based Privileges framework
VISUAL VERIFICATION
CareAR Instruct’s Visual Verification feature is a game changer for field service operations. By utilizing advanced computer vision and machine learning technologies, Visual Verification provides a reliable solution for ensuring accuracy and compliance in the field.
Visual verification utilizes computer vision AI-based object detection, trained on your own custom data set, to visually detect objects (such as equipment, parts, etc.) and their states (position, on, off, in, out, etc.). When inserted into an Instruct experience, the visual verification step confirms that the step was completed correctly. This can be beneficial for ensuring safety compliance, procedural accuracy, and improved work quality.
Visual verification includes a full pipeline for creating custom data sets, efficiently labeling the data set, training an ML model, and testing that model. The pipeline is designed so that a business can rapidly create custom ML models for object detection in a fraction of the time required by traditional methods. In addition to the ML model creation pipeline, the Experience Builder can be used to rapidly incorporate the ML model into a workflow that supports technicians and customers through step-by-step instructions reinforced with AI Visual Verification.
Process
- Scan – Scanning is done using the CareAR app scan functionality. This makes it easy to quickly create an image data set that will be used to train an ML model.
- Label – Once the scans are complete, a user can log in to the CareAR portal object detection interface and label the data set using our 3D tools. This allows the user to label an object once and have every image in the scan labeled instantly in one step (see the illustrative sketch after this list).
- Train – Once the scans have been labeled, they can be used to train an ML model for object and state detection.
- Validate – The model is automatically validated during the training process; however, it is recommended to test the model in a real-world environment to verify detections are occurring correctly.
- Build – The ML model can then be built into a workflow using Experience Builder. This allows the user to configure the desired behavior when detections occur.
- Publish – Once the experience is complete, the user can publish it, which deploys both the experience and the ML model for use by anyone with access to the experience.
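To make the labeling step more concrete, here is a minimal, purely illustrative sketch of what a single object detection label might carry (image reference, object class, state, and bounding box). The field names and values are assumptions for illustration only and do not reflect CareAR's internal data format.

    from dataclasses import dataclass

    @dataclass
    class DetectionLabel:
        """One labeled object in one image of a scan (illustrative only)."""
        image_path: str     # hypothetical path, e.g. "scan_001/frame_0042.jpg"
        object_class: str   # the detected object, e.g. "toner_cartridge"
        state: str          # the detected state, e.g. "inserted" or "removed"
        bbox: tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max), normalized 0-1

    # Labeling an object once in the 3D tool propagates a record like this
    # to every image in the scan.
    example = DetectionLabel(
        image_path="scan_001/frame_0042.jpg",
        object_class="toner_cartridge",
        state="inserted",
        bbox=(0.21, 0.35, 0.48, 0.72),
    )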
IMAGE ELEMENT: GENERATIVE AI IMAGE CREATION
Generative AI technology is now available to assist content creators in making Instruct experiences within the Experience Builder. Using the image generator feature, content creators can insert appropriate, aesthetically pleasing images into an experience, limiting the need to search online for images that may be subject to copyright.
The image element now contains an option to AI Generate an image. Click AI Generate, then enter a prompt describing the image you wish to create. After you submit the prompt, up to four images will be presented. Simply click the desired image to select it and insert it into the image element.
INTELLIGENT SEARCH UPDATE
CareAR’s Intelligent Search has undergone a re-architecture. It has been redesigned on a modern serverless architecture that will deliver better scale and performance. It is also significantly easier to use when creating search indexes: the search index creation process has been dramatically simplified to just three input fields. You only need to supply 1) an index name, 2) the URL(s) to be indexed, and 3) optionally, any URLs you want excluded from indexing. After you save the search index configuration, indexing of the configured websites begins immediately.
NOTE: You should only create search indexes for websites that you have permission to index. Attempting to index websites without permission may be viewed negatively by the website owner.
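As a rough illustration of how compact the new configuration is, the three inputs could be represented as below. This is a sketch only; the portal collects these values as form fields, and the field names and URLs shown are assumptions, not an API.

    # Illustrative only: the portal collects these three inputs as form fields.
    search_index_config = {
        "index_name": "support-kb",                    # 1) Index name
        "urls": ["https://support.example.com/docs"],  # 2) URL(s) to be indexed
        "excluded_urls": [                             # 3) Optional URLs to exclude
            "https://support.example.com/docs/internal",
        ],
    }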
Legacy Intelligent Search Index
With this update, users will no longer be able to create search indexes using the legacy Intelligent Search architecture; new index creation must use the new Intelligent Search architecture. Previously created legacy Intelligent Search indexes will continue to work, but users should recreate their search indexes in the new Intelligent Search as soon as possible. The Legacy Intelligent Search indexing function will be deprecated later in 2024.
INSTRUCT WEB APP RESPONSIVE LAYOUTS
We’ve enhanced the Instruct web app presentation on desktop and tablet form factors with a proper responsive layout. We’ve implemented three layout sizes (Small, Medium, and Large, per the diagram below) that adapt as the viewer adjusts the browser window dimensions. The result is a more pleasantly proportioned output.
ADDITIONAL INSTRUCT/EXPERIENCE BUILDER UPDATES
New Experience Links
As of release 24.02 of the platform, all new experiences created will have a new link structure. This change lays the foundation for removal of a technology dependency that will be deprecated in August 2025.
Player Lighting Enhancements
For experiences containing a 3D model (i.e., GLB file), we introduce directional lighting enhancements on the 3D model to reveal more details and shadows on the object. The net effect is an even more visually appealing 3D object in your experience.
Instruct Session Exports
With this release, users can now export Instruct session data. The export file format is JSON, due to the widely varying nature of experience usage data. This format is familiar to many programmers and IT personnel and can be easily imported into BI systems, visualization tools, data viewers/explorers, or other third-party tools.
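Because the export is plain JSON, it can be inspected with standard tooling before loading it into a BI or visualization system. The sketch below uses only the Python standard library; the file name and field names are assumptions for illustration, not the actual export schema.

    import json
    from collections import Counter

    # Load an exported Instruct session file (file name is hypothetical).
    with open("instruct_sessions_export.json", "r", encoding="utf-8") as f:
        sessions = json.load(f)

    # Example: count sessions per experience. Assumes a top-level list of
    # session records with an "experience_name" field -- adjust to the real schema.
    counts = Counter(s.get("experience_name", "unknown") for s in sessions)
    for name, total in counts.most_common():
        print(f"{name}: {total} sessions")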
ASSIST RECORDING ENHANCEMENTS
To enhance the Assist recording feature, we’ve changed how recording files are written to storage and added the ability to play back recordings when logged into the CareAR admin or user portal.
AWS S3 File Structure
Prior to release 24.02, Assist recording files were written to the customer's AWS S3 bucket. While this approach ensures maximum control over the recording files (CareAR knows where they are initially written), it made finding and accessing those files exceedingly difficult, especially across a large set of CareAR users.
This feature changes the folder structure of new files written to the AWS S3 bucket to improve navigation and access for the bucket administrator.
In 24.02 and beyond, Assist recording files are written into a folder hierarchy as follows:
tenant_id/group_id/username/
... where group_id is the user's group. If a user is not assigned to a group, we set the group_id path segment to the string "unassigned".
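Under the new hierarchy, a bucket administrator can scope listings to a tenant, group, or user simply by prefix. A minimal sketch using boto3, where the bucket name, tenant_id, group_id, and username are placeholders:

    import boto3

    s3 = boto3.client("s3")

    # List recordings for one user under the tenant_id/group_id/username/ layout.
    # The bucket name and path segments below are placeholders.
    prefix = "tenant_123/unassigned/jane.doe/"
    paginator = s3.get_paginator("list_objects_v2")

    for page in paginator.paginate(Bucket="customer-assist-recordings", Prefix=prefix):
        for obj in page.get("Contents", []):
            print(obj["Key"], obj["Size"])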
In-Portal Playback
We now provide the ability for a user to play back an Assist recording while in the CareAR portal.
To enable CareAR to access the Assist recordings in the customer's AWS S3 bucket, the tenant admin needs to enable permission in the Connect Center.
Once enabled, a tenant user can click on a recording link in Analytics > Session Activity for immediate playback of the video.
This enhancement works on pre-24.02 recording files as well as all newly created recording files.
Configuration Steps
- Open the Amazon S3 connector
- Enable the check box option, Enable Playback in Portal
- Click Save
FORMAL SUPPORT FOR MICROSOFT EDGE
Numerous enterprises require their users to use Microsoft’s Edge browser when accessing the Internet. In some markets (e.g., Japan), it is the second most popular web browser for desktop.
With release 24.02 of the core platform, we introduce formal support for Microsoft Edge for:
• CareAR’s user and admin portals
• CareAR Assist for Browser
• CareAR Instruct for Browser
ADDITIONAL ASSIST UPDATES
Screen Share Prep: Allow Screen Share
Not all of CareAR’s enterprise customers will necessarily want to enable screen share (even though it will be available). In preparation for the screen share feature, the roles-based privileges framework has been updated to include Allow Screen Share as a new option. You can find this setting at Admin portal > Users > Roles.
Against each role, a check box is presented. If the check box is checked, users with that role will be permitted to use the screen share feature.
If unchecked, users will see the tool button greyed out and be informed that they are not enabled for this feature.
Screen Share will debut as a feature in release 24.02 of the CareAR App.
Assist Web App Audio Improvements
With release 24.02 of the Assist web app, we upgraded the core SDK and tuned the gain parameter to improve audio quality. The result is improved audio in Assist sessions involving participants using the Assist web app. The improvement is especially noticeable for users of Apple’s Safari browser.