Public-sector authorities and organizations must make their communications universally accessible; a current EU directive requires it. Under that directive, any content in paper and electronic documents, on websites, and in apps must be accessible, understandable, and robust.
Although universal accessibility is often framed as equality for the disabled, the demand for universally accessible information is not restricted to the needs of the physically or mentally disabled. Strictly speaking, inclusion is a sidebar in the discussion. Barrier-free communication has multiple facets, but at its core is one thing: content must now be generated and made available as intelligently as possible. That also includes the language itself (comprehensibility, syntax, multilingualism).
In other words, the demand is for documents that are not only "enriched" with the structural information required by the Barrier-Free Information Technology Ordinance (BITV) in Germany, Section 508 in the U.S., and other statutes, but also with meaningful data that can be extracted and linked as desired, for example to conduct highly complex, targeted information research.
The fact is that a document's semantic quality plays an important role regardless of what the law requires. Take omnichannel communications as an example. Nowadays, the recipient dictates the communications channel. Hence, businesses not only need to separate document creation from delivery, they also have to let go of fixed page sizes so the content can be conveyed easily via other media.
But that's not possible without adding detailed information to the document en route to output. That information is what lets the IT-savvy twenty-something read and sign a credit agreement on her smartphone; the senior receive his current pension notice via regular mail, the way he wants it; and, of course, the sight-impaired have a screen reader read their latest power bill out loud.
It's all about "breathing intelligence" into communication. Embedding structural information is known in the jargon as tagging. Creating multi-channel-capable, and therefore responsive, documents takes care of the universal accessibility issue practically as an afterthought: done right, accessibility automatically brings inclusion with it.
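To make the idea of tagging concrete, here is a minimal sketch. The document, the roles, and the figures are all invented for illustration; the roles are loosely modeled on the heading and paragraph tags used in tagged PDF and HTML. The point is that a tagged structure can be queried programmatically, whereas a flat text blob cannot be interpreted reliably:

```python
# Hypothetical example: the same bill content as a flat text blob
# and as a tagged structure (roles loosely modeled on PDF/UA tags).
flat_text = "Power Bill March Total: 84.20 EUR Due: 2024-04-15"

tagged_doc = {
    "type": "Document",
    "children": [
        {"role": "H1", "text": "Power Bill March"},
        {"role": "P", "text": "Total: 84.20 EUR"},
        {"role": "P", "text": "Due: 2024-04-15"},
    ],
}

def headings(node):
    """Collect all heading elements -- trivial on the tagged version,
    guesswork on the flat text."""
    found = []
    for child in node.get("children", []):
        if child.get("role", "").startswith("H"):
            found.append(child["text"])
        found.extend(headings(child))
    return found

print(headings(tagged_doc))  # ['Power Bill March']
```

A screen reader, an archive system, or a delivery channel can all navigate the same tagged structure; that is what "breathing intelligence" into a document means in practice.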
This fact alone should be enough to motivate companies to face the issue head on. In light of big data, artificial intelligence (AI), and other current technologies, every digital transformation worth its salt already focuses on gathering and using actionable data. A discussion of standardization and automation in document and output management is useless if the needed data isn't available. If you are still rendering your pixelated documents readable via optical character recognition (OCR), you're a long way from creating intelligent documents.
The ultimate goal is not only the availability of the document itself but also the data it contains. There is ample awareness of this across industries and countries. Meanwhile, the specialist departments and even the customers themselves are setting the bar high. Marketing and sales, for example, demand increasingly detailed information so they can appeal to customers in a targeted way (automatic, selective campaigns).
AI methods can be used to easily generate the data needed – provided it is available. Nowadays it's not enough to just retrieve documents from the archive and display them. To put it bluntly, you need to be able to do something with the content – like generate specific answers to specific problems, quickly and automatically.
But the real world still has some catching up to do. Companies still have any number of "data sinks" where documents are stored as image files without actionable metadata. Many content themselves with rendering the archived documents readable but do nothing to enrich the data. A wait-and-see attitude still prevails, which is surely also due to legacy structures in document creation. Who wants to part with proven applications and processes? Some shy away from the expense of tagging existing documents after the fact, a concern that is hardly unfounded.
Still, data and its applications are a valuable commodity that can earn a lot of money; they form the foundation without which digital technologies cannot unfold their potential. Data is the new oil. Google is leading the way: its Dataset Search engine bundles the countless providers of scientific datasets on the web to make research easier for scientists, journalists, and students. Behind it lies the phenomenon of the "semantic web," which is about having available not only the text itself but the content as data that can be automatically correlated.
Only then does ongoing, multi-level information research become possible. Instead of manually searching a document for a specific piece of information, the web readily provides the answer. These are not simple search results but complex ones that can only be generated by linking different data. If you want to know the population of Berlin in 1920, you can certainly find it on the web. But if you want to know how many of those inhabitants were male, female, or under the age of 25, you need smarter search methods.
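The Berlin example can be sketched in a few lines. This is a toy stand-in for a semantic-web query, not any real dataset or query language: the records, field names, and all figures below are invented for illustration. Once the content exists as structured, linkable data, compound questions become a matter of filtering and aggregating:

```python
# Hypothetical linked dataset: population broken down by year, sex,
# and age group. All figures are invented for illustration.
records = [
    {"city": "Berlin", "year": 1920, "sex": "male",   "age_group": "under_25", "count": 720_000},
    {"city": "Berlin", "year": 1920, "sex": "female", "age_group": "under_25", "count": 700_000},
    {"city": "Berlin", "year": 1920, "sex": "male",   "age_group": "25_plus",  "count": 1_150_000},
    {"city": "Berlin", "year": 1920, "sex": "female", "age_group": "25_plus",  "count": 1_310_000},
]

def query(records, **filters):
    """Sum the counts of all records matching the given filters --
    a toy model of querying linked data."""
    return sum(
        r["count"] for r in records
        if all(r.get(k) == v for k, v in filters.items())
    )

# Simple question: total population.
total = query(records, city="Berlin", year=1920)
# Compound question: females under 25 -- easy on structured data,
# hopeless on a scanned page image.
under_25_female = query(records, city="Berlin", year=1920,
                        sex="female", age_group="under_25")
print(total, under_25_female)  # 3880000 700000
```

The same pattern scales up: real semantic-web stacks express such filters in dedicated query languages over graph data rather than Python lists, but the principle of answering compound questions by linking data is identical.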
The semantic web is still among the most undervalued topics, but rest assured that within a few years it will permeate our entire lives. German universities are already offering programs in barrier-free communication, such as the one launched in the winter semester 2018/2019 at the University of Hildesheim (near Hannover).
But it's high time to rethink document generation. The new approach: documents are data sources that provide companies with the raw material to tap into new markets. The needed technologies are available, and there are already enough applications and IT solutions that support intelligent document production. So why wait? As far as universal accessibility is concerned, you'll also be on the safe side. The approach may differ from one company to the next (a complete overhaul of document generation, retrofitting structural information and metadata, or both). But the time to start is now.
Carsten Lüdtge, a qualified journalist (university degree: Diplom) and specialist editor, is responsible for press and public relations at Compart, an international manufacturer of software for customer communication, and is in charge of the Compart Group’s entire content management. He has more than 20 years of PR expertise, with a focus on IT.