STAMP™

Standardized Talent Asset Mapping Protocol

Our technology unlocks next-generation learning and talent data solutions for work and education organizations and their platforms.

Next Generation Skill Tagging

STAMP creates and manages linked data files that describe the knowledge and skills of work and education in a scalable and stackable way. They power graph databases and provide a method of pre-programming verified credentials.

STAMP™ creates linked data file maps of the knowledge and skills gained from digital work or learning systems, such as courses, projects, assignments, papers, tasks, assessments, meetings, presentations, reviews, etc.

A technology that creates stackable talent data files that document the knowledge and skills of work performed by people in digital systems.

Developed through collaboration with:

Our technology leverages emerging talent data standards, innovations, and technology to create and combine new dimensions of talent visibility never before possible.

The latest standards, methods, and technology:

The Next Generation of Skill Tagging:

Ontological Refraction is a process we designed to solve the semantic interoperability problem: creating a standardized, normalized meaning of terms across platforms, organizations, and industries. We needed the meaning of every term to be universally understood no matter where a STAMP is created, carried to, or stacked with.

In other words, Ontological Refraction is designed to create a “Rosetta Stone”: a key everyone can use to understand the list of terms, their meanings, and their relationships.

When creating a STAMP file, an ontology is needed to act as a glossary or table of elements, identifying and attributing the correct meaning to the concepts that are sequenced.

The resulting ontology, used as a lens to identify relevant terms during Concept Sequencing, acts as the “Key Signature” of the STAMP file, giving it meaning and context.

Course groups are scanned (using HeadAI’s ontological NLU) to determine their “chemical makeup”. We generate a JSON map and a weighted list of the concepts and their relationships.

We allow that to be “refracted” off of an industry taxonomy (e.g. EMSI Burning Glass, or the admin’s choice) to match the concepts to interoperable industry-standard terms, modifying the JSON file.
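The two steps above can be sketched as follows. This is a minimal illustration, not real HeadAI output: the concept terms, weights, and taxonomy IDs are all invented for the example. A scan yields a weighted concept list, and the “refraction” step attaches an industry-standard identifier to each concept the taxonomy recognizes.

```python
# Step 1 (hypothetical output): weighted concepts extracted from a course group.
course_concepts = {
    "course": "Intro to Data Analysis",
    "concepts": [
        {"term": "spreadsheets", "weight": 0.42},
        {"term": "pivot tables", "weight": 0.33},
        {"term": "data cleaning", "weight": 0.25},
    ],
}

# Step 2 (hypothetical): an industry taxonomy keyed by local term.
taxonomy = {
    "spreadsheets": {"id": "SKILL:1001", "label": "Spreadsheet Software"},
    "data cleaning": {"id": "SKILL:2417", "label": "Data Cleansing"},
}

def refract(concepts, taxonomy):
    """Attach a standard taxonomy ID to each concept where a match exists."""
    for c in concepts["concepts"]:
        match = taxonomy.get(c["term"])
        c["standard"] = match["id"] if match else None
    return concepts

refracted = refract(course_concepts, taxonomy)
matched = [c["term"] for c in refracted["concepts"] if c["standard"]]
print(matched)  # → ['spreadsheets', 'data cleaning']
```

Concepts with no taxonomy match keep a `None` marker, so unmapped local vocabulary is still visible in the modified JSON file.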

 

By sending course material through our design, which uses HeadAI as a lens and an industry taxonomy as a refraction, we create a self-generated ontology of the ideas covered in a given course category.

That ontology can be filtered to show the parent/child relationship of ideas, one of several relationships the ontology records, transforming what was just words and data into connected ideas.
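Filtering by relationship type can be sketched like this; the edge data and relationship names are invented for illustration, not the actual STAMP schema.

```python
# Hypothetical ontology edges: (concept, relationship, concept) triples.
edges = [
    ("statistics", "parent_of", "regression"),
    ("statistics", "parent_of", "hypothesis testing"),
    ("regression", "related_to", "machine learning"),
]

def filter_relation(edges, relation):
    """Keep only the concept pairs joined by one relationship type."""
    return [(a, b) for a, rel, b in edges if rel == relation]

hierarchy = filter_relation(edges, "parent_of")
print(hierarchy)  # → [('statistics', 'regression'), ('statistics', 'hypothesis testing')]
```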

The LMS extension will allow you to select the courses you want to generate an ontology from, then process, visualize, edit, and publish the ontologies that will define all course maps your LMS generates.

 

We allow the admin to make manual edits to the hierarchy through a visualization that groups clusters and nearest neighbors and suggests inclusions and exclusions, making it easy to visualize, edit, and publish in 5-10 minutes.

Once generated, the ontology will be hosted in the dashboard and can be exported, integrated, or regenerated.

Enterprises are charged by the number of live ontologies they have mapped and published.

See other parts of Gobekli's science & tech:

Just tagging isn’t enough. 

While simple tagging can help learners find courses by keyword, and can help AI, tagging alone doesn’t take into account the complex nature of what talent actually is.

Keywords are also not interoperable: they cannot be translated between organizations because they are stripped of scale and association.

 

Plato’s idea of dialogue, or “flow of meaning,” held that no idea exists in isolation; ideas exist through flow and interaction with other things and actions.

 

That is why every language has a subject and a predicate: together they convey a flow of meaning.

 

Hegel expanded this further with his dialectic, introducing the idea that every concept comprises a thesis, an antithesis, and a synthesis: the meaning of the two combined, which creates something new.

 

In order to accurately create interoperability of ideas and concepts, we need to use data science to transcribe metaphysics into graph data. To do this, we follow the laws of nature: we give each idea not just a single data point, but a point connected to other points across a plane of time, with a depth of overlapping layers that grow over time.

 

Our sequencing algorithm is controlled by LMS admin settings that identify the front-end and back-end structure of the LMS, creating a map of how each course scales.

 

A simple dashboard and visual selector tool lets admins label the LMS and page structure to map course categories or departments in 5-10 minutes.

During “concept sequencing”, individual lessons are measured and “bifurcated” to scale.

LMS & page elements create the structure used to label and link the concepts identified by the “Ontological Spectrometer”.

 

The categories and hierarchies are noted and determine the file’s architecture and clustering.

 

On-page course description elements, such as time, lesson types, and experience points, set the scale and identify the ratios for each Course Map file.

 

Our sequencing algorithm creates a linked data file (JSON-LD) that annotates the scale, context, and connections of the ideas described in module and lesson titles.
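A course map in that spirit might look like the sketch below. The `@context` vocabulary, field names, and IDs are assumptions for illustration (borrowing SKOS for labels and parent links), not the actual STAMP schema.

```python
import json

# Hypothetical JSON-LD course map: each concept carries a scale (its share
# of the course anatomy), a label, and a parent/child link for context.
course_map = {
    "@context": {"skos": "http://www.w3.org/2004/02/skos/core#"},
    "@id": "course:intro-data-analysis",
    "@type": "CourseMap",
    "concepts": [
        {
            "@id": "concept:data-cleaning",
            "skos:prefLabel": "Data Cleaning",
            "scale": 0.25,                            # share of the course anatomy
            "skos:broader": "concept:data-analysis",  # parent concept link
        }
    ],
}

doc = json.dumps(course_map, indent=2)
print(doc)
```

Because the file is plain JSON-LD, it can be loaded into a graph database or attached to a credential without any STAMP-specific tooling.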

 

They are scaled according to the “course anatomy” & labeled according to the elemental concepts identified by the “ontological spectrometer.” 

 

Finally, the ideas are linked in context by subject/predicate, reading left to right, top to bottom. 

 

Each lesson’s weight is divided according to the course anatomy already identified.
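One plausible reading of this weighting, sketched with invented numbers: lesson durations form the course anatomy, and each lesson’s weight is its share of the total.

```python
# Hypothetical course anatomy: lesson durations in minutes.
lesson_minutes = {"Lesson 1": 30, "Lesson 2": 60, "Lesson 3": 30}

# Normalize each lesson's share of the total, so weights sum to 1.
total = sum(lesson_minutes.values())
weights = {name: minutes / total for name, minutes in lesson_minutes.items()}
print(weights)  # → {'Lesson 1': 0.25, 'Lesson 2': 0.5, 'Lesson 3': 0.25}
```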

 

Descriptions add weight and details to key concepts identified in the titles.

The self-generated ontology is then used to identify the key terms in the text, which are linked together at scale by our sequencing algorithm.

By using the ontology like a periodic table of elements to identify and label the sequence of connected concepts, you create a sequence of the knowledge and skills grown in a given amount of learning on any LMS, producing a graph file that can be filtered to resemble DNA:

This then creates the raw data attachment that powers the addition to the admin screen on the course page:

These course maps not only help learners visualize the knowledge and skills in their courses, but also attach to their certificates for decentralized ownership. 


 

STAMP Data Science:

Linked, Scaled and Interoperable Talent Data Created by: