Extensible Metadata Platform (XMP) is a metadata framework developed around 2001 by Adobe, and the first version of its specification was published in January 2004. With one of its strengths being its seamless integration into various file formats, it has since conquered quite a number of user communities. While photographers still make use of EXIF (Exchangeable Image File Format, developed by digital camera makers) or IPTC (more strictly speaking IPTC IIM, or IPTC Information Interchange Model, where IPTC stands for the International Press Telecommunications Council), XMP is slowly taking its place as it is the more modern format, supporting for example Unicode. IPTC IIM itself is now more often used in its XMP flavour for the majority of digital photographs, rather than in its original binary format.
XMP also supports the world of PDF files: its first incarnation saw the light of day as XMP support in Adobe Acrobat 5. Its flexibility made it an obvious choice for PDF/A: it appeared very well suited to keeping track of metadata in the same reliable and structured way that PDF keeps track of the visual and semantic content of an e-paper document.
While PDF/A itself has turned into a huge success, especially in European countries, the metadata aspect of PDF/A, in the form of embedded XMP metadata, seems to be more difficult to understand and to put to use than PDF/A itself.
One of the problems with metadata in general, but more specifically so for XMP, is the fact that it is not looked at very often. In Adobe's Acrobat family of products, for example, it takes a number of steps to open the right dialogue to look at metadata. In addition, real-world implementations that display XMP fail to attract users other than die-hard engineers, and the presentation of the data and its structure is rawer than fresh sashimi. While surprisingly enough the majority of PDF files (and, per requirement of the PDF/A standard, all PDF/A files) do contain ample metadata, it is typically difficult to find the data fields of interest amid a number of computer-generated sequences of seemingly arbitrary bytes.
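Looking at the raw metadata is less daunting than the tooling suggests: an XMP packet is plain XML wrapped in `<?xpacket begin ...?>` and `<?xpacket end ...?>` processing instructions, so it can be located with a simple byte scan of almost any host file. A minimal sketch (the sample packet below is synthetic; a real file would be read with open(..., "rb")):

```python
import re

def extract_xmp_packets(data: bytes) -> list[str]:
    """Scan a file's bytes for embedded XMP packets.

    XMP packets are delimited by the <?xpacket begin ...?> and
    <?xpacket end ...?> processing instructions, so a byte-level
    scan works regardless of the host format (PDF, JPEG, TIFF, ...).
    """
    packets = []
    for match in re.finditer(
            rb"<\?xpacket begin=.*?\?>(.*?)<\?xpacket end=.*?\?>",
            data, re.DOTALL):
        packets.append(match.group(1).decode("utf-8", errors="replace").strip())
    return packets

# A tiny synthetic example; the id string is the standard XMP packet id.
sample = (b'junk<?xpacket begin="\xef\xbb\xbf" id="W5M0MpCehiHzreSzNTczkc9d"?>'
          b'<x:xmpmeta xmlns:x="adobe:ns:meta/">...</x:xmpmeta>'
          b'<?xpacket end="w"?>more junk')
print(extract_xmp_packets(sample)[0][:20])
```

This is only a first look, of course: a real consumer would hand the extracted XML to an RDF-aware parser rather than inspect it by eye.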
At the same time, some vendors took it too far. They decided to hide the XMP-ness of metadata from the user and to present only those metadata fields they deemed worthy to meet the eye of the beholder. This took away all of XMP's flexibility, and even the smallest need to access custom metadata turned out to be difficult to satisfy.
Intended as a counterbalance, user interface definition languages for creating custom views of custom metadata fields were invented, but their development was not for the faint-hearted. Deploying them and maintaining updates for them was not easy at all, not even inside a single organisation. And then, once those XMP custom panels were becoming more common and accepted by users interested in XMP, they were replaced by the next, even more potent user interface definition language, widely known as Flash. While it is unfair to always blame those who contributed something (most of the time it was Adobe developing XMP tools, and giving many of them away free of charge) rather than those who didn't bother to contribute anything at all, it has to be said that even the user interface implementations from the inventor and one of the most active supporters of XMP still have not achieved a state that lives up to the promise and potential of XMP.
Last but not least, the tools to manipulate metadata, or to interact with it beyond just reading it, often lack any degree of refinement: while everything seems possible, nothing turns out to be straightforward. There are free tools like ExifTool (whose XMP support is as good as its EXIF support), but the better they get, the more likely it is that one needs to master the command line or shell scripting to take full advantage of them.
As a consequence, even users who tried to look at the metadata in their files in many cases developed tactics to avoid it. This is a pity, since no metadata framework available today is more powerful or more flexible than XMP. It just needs to be vivified (quite) a bit more, by making looking at it a more pleasant and also a more useful experience.
The most basic approach to keeping track of metadata is to just have names for data fields and enter simple, unstructured information items into them: e.g. a number for the size of a book or the year of publication, a small piece of text for a description, and so forth. This approach is already a very powerful one, fairly robust, and also easy to implement with regard to both storing the information in databases (or spreadsheets, or text files, or …) and presenting it to a user for reading or changing it.
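As an illustration of this flat, name/value approach and where it breaks down (all field names and values below are made up):

```python
# The flat approach: every field is a single unstructured scalar.
# Easy to store, easy to display, easy to put in one database column.
record = {
    "title": "An Example Document",
    "year": 2013,
    "description": "A short piece of text describing the document.",
}

# The limits show as soon as a field is not scalar: multiple authors
# get flattened into one string, and the structure is lost. Who
# decides on the separator, and who guarantees it never appears
# inside a name?
record["authors"] = "A. Author; B. Author"
print(record["authors"].split("; "))
```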
It is however a relatively limited approach when it comes to slightly more advanced types of informational items, like a list of the authors of a publication, versions of the same piece of text in several languages, or the dimensions of a rectangle. Or for the more adventurous, ordered lists of data structures, like document change events, which might include the tool used, the type of modification applied and a time stamp.
XMP can do all of that, and probably more. Rooted in RDF (Resource Description Framework) and thus in XML, it has inherited most of XML's flexibility while not being bound by some of its limitations. In XMP it is no problem to use standard metadata fields alongside custom metadata fields. It is also fully acceptable to use and combine individual entries from any number of XMP schemas, just as may be necessary on a case-by-case basis.
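To make this concrete, here is a sketch that assembles a small XMP-style RDF fragment with Python's standard library: an ordered author list (rdf:Seq), a multilingual title (rdf:Alt with xml:lang), and a custom property in a hypothetical ex: namespace sitting right next to the standard Dublin Core fields:

```python
import xml.etree.ElementTree as ET

# Register the namespaces so the serialized XML uses readable prefixes.
# The "ex" schema is a made-up custom namespace for illustration.
NS = {
    "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
    "dc": "http://purl.org/dc/elements/1.1/",
    "ex": "http://ns.example.com/custom/1.0/",
}
for prefix, uri in NS.items():
    ET.register_namespace(prefix, uri)

def q(prefix: str, tag: str) -> str:
    return f"{{{NS[prefix]}}}{tag}"

rdf = ET.Element(q("rdf", "RDF"))
desc = ET.SubElement(rdf, q("rdf", "Description"), {q("rdf", "about"): ""})

# dc:creator is an ordered list (rdf:Seq) -- author order matters.
seq = ET.SubElement(ET.SubElement(desc, q("dc", "creator")), q("rdf", "Seq"))
for name in ["A. Author", "B. Author"]:
    ET.SubElement(seq, q("rdf", "li")).text = name

# dc:title is a language alternative (rdf:Alt).
alt = ET.SubElement(ET.SubElement(desc, q("dc", "title")), q("rdf", "Alt"))
for lang, text in [("x-default", "Metadata"), ("de", "Metadaten")]:
    li = ET.SubElement(alt, q("rdf", "li"))
    li.set("{http://www.w3.org/XML/1998/namespace}lang", lang)  # xml:lang
    li.text = text

# A custom field can sit right next to the standard ones.
ET.SubElement(desc, q("ex", "reviewStatus")).text = "approved"

xml = ET.tostring(rdf, encoding="unicode")
print(xml)
```

A full XMP packet would additionally wrap this in the x:xmpmeta element and the xpacket processing instructions, but the mixing of schemas is the point here.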
What is a blessing to some can easily turn into a curse for others. Some have abused the flexibility of XMP, changing their mind about what goes where every other day. This of course kills interoperability, one of the strongest points of XMP when it is used correctly. Once it comes to extracting XMP from a file and storing it somewhere, e.g. for tracking an ingested document in the database of a DMS, just having yet another table in the SQL database won't do. XMP can be far more complex than this, and the database design needs to reflect that. Similarly, the user interface for displaying and editing XMP needs to match XMP's structural flexibility. This can impose a substantial burden on developers, unless they decide to call it a day early on and try to get away with structure-by-tabbed-text designs.
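One possible shape for such a database design is a property table that preserves schema namespace, array position and language, rather than one column per field. A sketch using SQLite (table and column names here are illustrative, not taken from any standard):

```python
import sqlite3

# A single "one row per document" table cannot hold XMP's nested arrays
# and language alternatives. A property table keyed by namespace,
# property path, array index and language keeps the structure intact.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE xmp_property (
        doc_id     INTEGER NOT NULL,
        namespace  TEXT    NOT NULL,  -- schema URI, e.g. Dublin Core
        prop_path  TEXT    NOT NULL,  -- property name, or path into a structure
        array_idx  INTEGER,           -- position in an rdf:Seq/Bag, NULL if scalar
        lang       TEXT,              -- xml:lang of an rdf:Alt entry, else NULL
        value      TEXT    NOT NULL
    )""")

DC = "http://purl.org/dc/elements/1.1/"
rows = [
    (1, DC, "creator", 1, None, "A. Author"),
    (1, DC, "creator", 2, None, "B. Author"),
    (1, DC, "title", None, "x-default", "Metadata"),
    (1, DC, "title", None, "de", "Metadaten"),
]
con.executemany("INSERT INTO xmp_property VALUES (?, ?, ?, ?, ?, ?)", rows)

# Reassembling the ordered author list is then a simple query:
authors = [v for (v,) in con.execute(
    "SELECT value FROM xmp_property "
    "WHERE doc_id = 1 AND prop_path = 'creator' ORDER BY array_idx")]
print(authors)
```

The trade-off is real: queries become joins over a property table instead of column lookups, but nothing of the XMP structure is thrown away at ingest time.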
Today's world needs more powerful metadata structuring than lists of key-value pairs, and someone will have to get the job of providing the right tools done. Those who care about metadata should make a review of metadata capabilities an integral part of their decision process when buying new tools or solutions.
The PDF/A standard requires that any document metadata be recorded inside PDF/A files in the form of XMP. But not only that: if some of the metadata fields are custom metadata fields (fields not specified in the original XMP specification), their syntax and meaning have to be documented inside the file's XMP metadata, using XMP itself.
While this looks like an unnecessary exercise to some, it actually follows the spirit of PDF/A. Whatever is inside a PDF/A file shall be self-contained, and it must be possible, with reasonable effort, to retrieve the contents and their meaning in reasonable quality. The only prerequisites necessary to achieve this are defined in the PDF/A standard and the underlying specifications, like the PDF specification or the XMP specification. Data fields not specified in the XMP specification are considered custom metadata fields, and the meaning of such custom fields cannot be known unless some documentation is provided. As typically no repository exists that captures all custom fields ever used (and by whom), it is a smart approach to put the documentation next to the data fields it describes.
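As a sketch of what such in-file documentation looks like: PDF/A provides an extension schema container (the pdfaExtension, pdfaSchema and pdfaProperty namespaces) in which each custom property is described. The fragment below documents a hypothetical ex:reviewStatus field; the schema name, namespace URI and description are made up for illustration:

```xml
<rdf:Description rdf:about=""
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:pdfaExtension="http://www.aiim.org/pdfa/ns/extension/"
    xmlns:pdfaSchema="http://www.aiim.org/pdfa/ns/schema#"
    xmlns:pdfaProperty="http://www.aiim.org/pdfa/ns/property#">
  <pdfaExtension:schemas>
    <rdf:Bag>
      <rdf:li rdf:parseType="Resource">
        <pdfaSchema:schema>Example custom schema</pdfaSchema:schema>
        <pdfaSchema:namespaceURI>http://ns.example.com/custom/1.0/</pdfaSchema:namespaceURI>
        <pdfaSchema:prefix>ex</pdfaSchema:prefix>
        <pdfaSchema:property>
          <rdf:Seq>
            <rdf:li rdf:parseType="Resource">
              <pdfaProperty:name>reviewStatus</pdfaProperty:name>
              <pdfaProperty:valueType>Text</pdfaProperty:valueType>
              <pdfaProperty:category>external</pdfaProperty:category>
              <pdfaProperty:description>Editorial review status of the document</pdfaProperty:description>
            </rdf:li>
          </rdf:Seq>
        </pdfaSchema:property>
      </rdf:li>
    </rdf:Bag>
  </pdfaExtension:schemas>
</rdf:Description>
```

With this block present, a consumer decades from now can still work out what ex:reviewStatus means without access to the producing application.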
He who will reap must sow
The previous paragraphs may give the impression that dealing with metadata is mostly painful and rarely rewarding. It must be admitted that a substantial part of the return from investing in metadata (and its careful storage in archived documents) may only become apparent after a number of years. Nevertheless, some advantages of its proper use can provide a return on investment relatively soon. It is important, though, to develop a good understanding of what is needed (what will the metadata be used for?) and of the right tools to implement that usage. Where metadata are to be archived inside PDF/A files, they will usually have a history of their own during the document's life cycle before being archived. Making (more) active use of a document's metadata while the document is created and used will increase the likelihood of good-quality metadata worth archiving, without a lot of headache.