# Inference capabilities
Once you have data organized as a graph, there are several ways to perform inference, which is a core capability of AI systems.
A general definition for inference is:

> a conclusion reached on the basis of evidence and reasoning
For the W3C perspective, see <https://www.w3.org/standards/semanticweb/inference>, which describes how inference can be used to:

- improve the quality of data integration
- discover new relationships
- identify potential inconsistencies in the (integrated) data
The integrations within kglab that support inference capabilities may be combined to leverage each other's relative strengths, along with potential use of human-in-the-loop (or "machine teaching") approaches such as active learning and weak supervision.
These integrations include:

- Efforts by `owlrl` toward OWL 2 RL reasoning:
    - adding axiomatic triples based on OWL properties
    - forward chaining using RDF Schema

- Expanding the semantic relationships in SKOS for inference based on hierarchical transitivity and associativity

- Machine learning models in general, and neural networks in particular, can be viewed as a means for function approximation, i.e., generalizing from data patterns to predict values or labels. In that sense, graph embedding approaches such as `node2vec` provide inference capabilities.
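To give the flavor of how `node2vec` generates its training data: with its return and in-out parameters both set to 1, its biased walks reduce to uniform random walks, whose "sentences" of nodes are then fed to a skip-gram model. A stdlib-only sketch of that special case, on a made-up toy graph:

```python
import random

# toy adjacency list for a small undirected graph
graph = {
    "a": ["b", "c"],
    "b": ["a", "c"],
    "c": ["a", "b", "d"],
    "d": ["c"],
}

def random_walk(graph, start, length, rng):
    """One uniform random walk: the p = q = 1 special case of node2vec."""
    walk = [start]
    for _ in range(length - 1):
        walk.append(rng.choice(graph[walk[-1]]))
    return walk

rng = random.Random(42)
walks = [random_walk(graph, node, 5, rng) for node in graph for _ in range(10)]

# these walk "sentences" would normally be passed to a skip-gram
# implementation (e.g., gensim Word2Vec) to learn node embeddings
print(len(walks), len(walks[0]))  # 40 5
```

Nodes that co-occur in many walks end up with nearby embeddings, which is what supports downstream prediction of values or labels.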
- The probabilistic soft logic in `pslpython` evaluates systems of rules to infer predicates.
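The core idea can be sketched without the library: PSL relaxes predicates to soft truth values in [0, 1], scores each ground rule with a hinge "distance to satisfaction" under Łukasiewicz logic, and minimizes the weighted sum of those distances. A minimal stdlib illustration (the smokers rule and its truth values are made up):

```python
def luk_and(x, y):
    # Łukasiewicz t-norm: soft conjunction over [0, 1]
    return max(0.0, x + y - 1.0)

def rule_distance(body, head):
    # hinge distance to satisfaction of `body -> head`;
    # PSL minimizes a weighted sum of such distances
    return max(0.0, body - head)

# ground rule: Friends(a, b) & Smokes(a) -> Smokes(b)
friends_ab, smokes_a, smokes_b = 0.9, 0.8, 0.4
d = rule_distance(luk_and(friends_ab, smokes_a), smokes_b)
print(round(d, 2))  # 0.3
```

A nonzero distance like this one is the signal that inference should raise `Smokes(b)` (or lower the body's values) to better satisfy the rule system.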
- Using `pgmpy` for statistical inference in Bayesian networks.
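As a reminder of what such inference computes, here is the underlying arithmetic for a two-node network done by hand; `pgmpy` automates this for larger networks with algorithms such as variable elimination. The probabilities below are made up:

```python
# tiny Bayesian network: Rain -> WetGrass
p_rain = 0.2                             # prior P(Rain=True)
p_wet_given = {True: 0.9, False: 0.1}    # P(WetGrass=True | Rain)

# posterior P(Rain=True | WetGrass=True) via Bayes' rule:
# weigh each joint outcome, then normalize over the evidence
joint_true = p_rain * p_wet_given[True]
joint_false = (1 - p_rain) * p_wet_given[False]
posterior = joint_true / (joint_true + joint_false)

print(round(posterior, 3))  # 0.692
```

Observing wet grass raises the belief in rain from 0.2 to roughly 0.69, which is the kind of evidence-driven update a Bayesian network query returns.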