Once you have data organized as a graph, there are several ways to perform inference, a core capability of AI systems.
A general definition for inference is:
a conclusion reached on the basis of evidence and reasoning
For the W3C perspective, see <https://www.w3.org/standards/semanticweb/inference>, which describes how inference on a knowledge graph can be used to:

- improve the quality of data integration
- discover new relationships
- identify potential inconsistencies in the (integrated) data
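To make "discover new relationships" concrete, here is a minimal sketch of rule-based inference by forward chaining over a toy set of triples. The data and rule set are hypothetical, chosen only to illustrate how new triples can be derived from existing ones:

```python
# Hypothetical triples: class hierarchy plus one instance assertion.
triples = {
    ("Dog", "subClassOf", "Mammal"),
    ("Mammal", "subClassOf", "Animal"),
    ("rex", "type", "Dog"),
}

def infer(triples):
    """Forward chaining: apply rules until no new triples are derived."""
    kb = set(triples)
    changed = True
    while changed:
        changed = False
        new = set()
        for s, p, o in kb:
            for s2, p2, o2 in kb:
                if p2 == "subClassOf" and s2 == o:
                    if p == "subClassOf":
                        # Rule 1: subClassOf is transitive
                        new.add((s, "subClassOf", o2))
                    elif p == "type":
                        # Rule 2: type propagates up the class hierarchy
                        new.add((s, "type", o2))
        if not new <= kb:
            kb |= new
            changed = True
    return kb

kb = infer(triples)
print(("rex", "type", "Animal") in kb)  # a relationship not stated explicitly
```

The derived triple `("rex", "type", "Animal")` never appears in the input; it is inferred by chaining the two rules, which is the essence of what RDFS/OWL reasoners do at scale.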
The integrations within kglab that support inference may be combined to leverage each other's relative strengths, potentially along with human-in-the-loop (or "machine teaching") approaches such as active learning and weak supervision.
These integrations include:
Machine learning models in general, and neural networks in particular, can be viewed as a means of function approximation, i.e., generalizing from data patterns to predict values or labels. In that sense, graph embedding approaches such as `node2vec` provide inference capabilities.
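The core of `node2vec` is a second-order biased random walk over the graph; the walks are then treated as "sentences" and fed to a skip-gram model to produce embeddings. Below is a minimal sketch of the walk step only, on a hypothetical toy graph, not the `node2vec` library's API:

```python
import random

# Hypothetical toy graph as an adjacency list.
graph = {
    "a": ["b", "c"],
    "b": ["a", "c", "d"],
    "c": ["a", "b"],
    "d": ["b"],
}

def node2vec_walk(graph, start, length, p=1.0, q=1.0, rng=random):
    """One node2vec-style biased random walk.

    p penalizes returning to the previous node; q trades off
    breadth-first (q > 1) versus depth-first (q < 1) exploration.
    """
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        nbrs = graph[cur]
        if len(walk) == 1:
            walk.append(rng.choice(nbrs))
            continue
        prev = walk[-2]
        weights = []
        for n in nbrs:
            if n == prev:
                weights.append(1.0 / p)   # step back to the previous node
            elif n in graph[prev]:
                weights.append(1.0)       # node at distance 1 from prev
            else:
                weights.append(1.0 / q)   # node at distance 2 from prev
        walk.append(rng.choices(nbrs, weights=weights)[0])
    return walk

# Generate a corpus of walks; in practice these would be passed to a
# skip-gram model (e.g., gensim's Word2Vec) to learn node embeddings.
walks = [node2vec_walk(graph, n, length=5) for n in graph for _ in range(10)]
```

Nodes that tend to co-occur on walks end up with similar embeddings, which is what supports downstream inference tasks such as link prediction.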
The probabilistic soft logic in `pslpython` evaluates systems of weighted rules to infer predicates.
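To give a flavor of how soft-logic rule evaluation differs from crisp inference, here is a minimal sketch (illustrative only; not the `pslpython` API). Truth values live in [0, 1], and a hypothetical rule `friend(A, B) & smokes(A) => smokes(B)` propagates soft evidence via the Łukasiewicz conjunction:

```python
# Hypothetical soft evidence: truth values in [0, 1].
friend = {("alice", "bob"): 0.9, ("bob", "carol"): 0.6}
smokes = {"alice": 1.0, "bob": 0.0, "carol": 0.0}

def t_norm(x, y):
    """Lukasiewicz conjunction: max(0, x + y - 1)."""
    return max(0.0, x + y - 1.0)

# One round of applying the rule friend(A, B) & smokes(A) => smokes(B):
# the rule body's soft truth value gives a lower bound on the head.
inferred = dict(smokes)
for (a, b), f in friend.items():
    inferred[b] = max(inferred[b], t_norm(f, smokes[a]))

print(inferred)
```

A real PSL program solves a joint convex optimization over all grounded rules (weighted by rule importance) rather than this single forward pass, but the soft truth values and Łukasiewicz semantics carry over.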