Semantic interoperability would, in an ideal world, rest on a shared understanding of what concepts and data elements mean. Mapping between terms in different ontologies, or between data elements in different formats, gets us part of the way, but deeper issues remain: people struggle to represent meaning in computer systems built for others whose model of the world may not be (exactly) the same.
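As a toy illustration of why mapping alone is not enough (the vocabularies here are invented, not drawn from any real ontology), a term-to-term mapping can succeed syntactically while silently discarding distinctions that one side's model of the world depends on:

```python
# Hypothetical vocabularies: the source distinguishes four settlement
# types, while the target only distinguishes "urban" from "rural".
source_terms = ["city", "town", "village", "hamlet"]
mapping = {
    "city": "urban",
    "town": "urban",
    "village": "rural",
    "hamlet": "rural",
}

def translate(term):
    """Map a source-vocabulary term to the target vocabulary."""
    return mapping[term]

# The mapping is total, so every source term translates without error...
translated = [translate(t) for t in source_terms]
print(translated)  # ['urban', 'urban', 'rural', 'rural']

# ...but it is not invertible: "urban" no longer records whether the
# original data meant a city or a town, so a query for "towns" posed
# against the translated data cannot be answered faithfully.
urban_origins = [t for t in source_terms if mapping[t] == "urban"]
print(urban_origins)  # ['city', 'town']
```

The mapping itself is "alright" in the sense the text describes: nothing fails. The loss only surfaces later, when someone asks a question that depends on a distinction the target vocabulary never made.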
A striking example of the difficulty of semantic interoperability is a Linked Data challenge that sought to answer the question: “Which town or city in the UK has the highest proportion of students?”. One answer puts Cambridge first (with some quite obvious mistakes in the data), while another puts Milton Keynes on top. Without digging too deep into the details, one can see that it is important to ensure that the definitions of “town”, “city” and “student” are the same across all data sources (Wikipedia, government data…), and to formulate a sufficiently precise query.
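To make this concrete, here is a minimal sketch with made-up figures (these are illustrative numbers and classifications, not the actual challenge datasets or real statistics): two sources answer the same question differently because each applies its own definitions of which settlements count as a “town or city” and who counts as a “student”.

```python
# Hypothetical data sources. Each entry is:
#   name: (settlement type, resident population, student count)
# The two sources classify places and count students differently.
source_a = {
    "Cambridge":     ("city", 124_000, 38_000),     # counts language-school students
    "Oxford":        ("city", 152_000, 33_000),
    "Milton Keynes": ("borough", 230_000, 30_000),  # not a "town or city" to this source
}
source_b = {
    "Cambridge":     ("city", 124_000, 25_000),     # full-time university students only
    "Oxford":        ("city", 152_000, 26_000),
    "Milton Keynes": ("town", 230_000, 90_000),     # e.g. distance learners included
}

def highest_student_proportion(source):
    """Return the settlement with the highest students/population ratio,
    among the places this source itself classifies as a town or city."""
    eligible = {
        name: students / population
        for name, (kind, population, students) in source.items()
        if kind in ("town", "city")
    }
    return max(eligible, key=eligible.get)

print(highest_student_proportion(source_a))  # Cambridge
print(highest_student_proportion(source_b))  # Milton Keynes
```

The query logic is identical in both cases; only the semantics embedded in the data differ, and that alone flips the answer.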
The nuances of meaning make a huge difference here, and a casual user is unlikely to get the semantics exactly right to match them. Can systems be designed to cope with these intricacies: systems that dynamically incorporate context-sensitive and domain-specific semantics, accommodate semantic change over time, and support locally negotiated semantics rather than a single universal approach?