Highlights from APE 2009 – Preconference Day

From the 19th to the 21st of January I was in Berlin to visit this magnificent city and to attend the APE (Academic Publishing in Europe) conference. From their website:

“APE Conferences encourage the debate about the future of scientific publications, information dissemination and access to scientific results. They offer an independent forum for ‘open minds’ with a free exchange of opinions and experiences between all stakeholders.”


This year’s edition was themed “The impact of publishing”. The Preconference Day, which focused on information competence, was organized by Anthony Watkinson of University College London and Matthias Wahls of Brill Academic Publishers. I will focus here on what I deemed the most interesting parts. The first panel, on licensing (collective and specific), featured Wilma Mossink from SURF and Mark Bide, the incoming director of EDItEUR, who both talked about licensing schemes. Mossink focused on national and international initiatives and on possibilities for cooperative licensing schemes and frameworks such as Knowledge Exchange (a cooperation of JISC, DEFF, SURF and DFG) to enhance access to scholarly information on the Internet. These initiatives aim to help smaller publishers, who lack the money and time to invest in such schemes themselves. Knowledge Exchange serves as an umbrella organization supporting the use and development of ICT infrastructure.


Where Mossink went on to discuss initiatives like AGORA and HINARI and different Open Access frameworks for licensing models, Mark Bide focused mainly on the challenge of communicating these licenses in their new technical environment. Problems such as the correct interpretation of licenses, and how to express licenses in machine-readable form to support machine-to-machine communication (especially important when a search engine like Creative Commons Search is used to find certain kinds of CC-licensed material), are important to consider. The goal is to communicate clearly what you can and cannot do with the digital content a license refers to. Bide stressed that we need a method of communicating publishers’ policies that is flexible and extensible and that supports any (future) business model. Because, he warns, if publishers don’t care about communicating licenses, nobody will, especially not the search engines.
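One well-known way of making a license machine-readable is Creative Commons’ practice of marking licensed content with rel="license" links, which crawlers can pick up and act on. As a minimal sketch of the crawler side (the page snippet and class name below are invented for illustration, not taken from any real system):

```python
from html.parser import HTMLParser

class LicenseLinkParser(HTMLParser):
    """Collects the href of every <a> or <link> tag carrying rel="license"."""

    def __init__(self):
        super().__init__()
        self.licenses = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("a", "link") and "license" in attrs.get("rel", "").split():
            self.licenses.append(attrs.get("href"))

# A made-up page fragment with a Creative Commons license link.
page = '''
<html><body>
  <p>Photo by Jane Doe, licensed under
     <a rel="license" href="http://creativecommons.org/licenses/by/3.0/">
       CC BY 3.0</a>.</p>
</body></html>
'''

parser = LicenseLinkParser()
parser.feed(page)
print(parser.licenses)  # → ['http://creativecommons.org/licenses/by/3.0/']
```

Once the license URI is machine-readable like this, a search engine can filter results by what each license permits, which is exactly the machine-to-machine communication Bide is after.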

One such publisher-initiated effort is ACAP (Automated Content Access Protocol), launched in 2007. ACAP focuses on machine-to-machine licensing communication (where Creative Commons, for instance, focuses on machine-to-person communication). Like Creative Commons, ACAP is a general protocol for all kinds of content, whereas initiatives like ONIX (books) and PLUS (photography) are more sectoral.

Bide argues that the need for standardization of licensing protocols, particularly of their semantics, must be recognized. Convergence is needed: there are too many standards, and we need to get them out of their sectoral seclusion. It is all about the communication of licenses and permissions, not about their enforcement. This convergence of standards also applies to Open Access licenses, whose variety makes it hard to define Open Access. We need to make clear under which particular licenses content is truly Open Access and under which it is not.


Another interesting panel focused on Discovery: helping users find what is appropriate, and featured Jan Velterop. He focused on what one can do to make scientific literature even more useful. Since we have access to more information than we can handle, we need a way to navigate it. To do this, we need to connect elements of knowledge to each other: we need to publish not only articles but also visualizations.

This requires new skills, not only for the researcher (who needs to become a real knowledge worker); we also need a change in culture. Velterop argues that we need to focus on concepts: it is all about concepts (and thus about meaning), not about keywords. Take, for instance, the word ‘jaguar’: as a keyword it refers to both cars and animals, but focusing on the concept removes the ambiguity. When you search for information, you want to connect these concepts, and this, Velterop explains, is what semantic relationships and semantic highlighting do. Concepts are interconnected and interlayered. Velterop’s company, Knewco, offers a free service that does exactly that, he states.
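Velterop’s ‘jaguar’ example can be sketched in a few lines of code. This is a toy illustration of concept-based versus keyword-based search, not Knewco’s actual system; the concept identifiers, terms, and documents below are all invented:

```python
# Each concept has a stable identifier, a human label, and the surface
# terms (keywords) that may refer to it. The keyword "jaguar" maps to
# two different concepts; the concept IDs themselves are unambiguous.
concepts = {
    "C001": {"label": "Jaguar (animal)", "terms": {"jaguar", "panthera onca"}},
    "C002": {"label": "Jaguar (car maker)", "terms": {"jaguar", "jaguar cars"}},
}

# Documents are indexed by concept ID, not by raw keyword.
documents = {
    "doc1": {"C001"},   # a zoology article
    "doc2": {"C002"},   # an automotive article
}

def search_by_keyword(term):
    """Keyword search: returns every document whose concepts use the term."""
    hits = {cid for cid, c in concepts.items() if term in c["terms"]}
    return [d for d, cids in documents.items() if cids & hits]

def search_by_concept(cid):
    """Concept search: the ambiguity is gone."""
    return [d for d, cids in documents.items() if cid in cids]

print(search_by_keyword("jaguar"))   # both documents: the keyword is ambiguous
print(search_by_concept("C001"))     # only the zoology article
```

The design point is the one Velterop makes: the disambiguation happens once, at indexing time, so every later search operates on unambiguous concepts rather than ambiguous strings.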

From the Knewco website:


“Where search engines are concerned with the whereabouts of information, Knewco is concerned with the meaning, significance and connection between elements of knowledge. Knewco offers – free – services based on putting knowledge from different and disparate sources together, in a conceptually coherent and consistent way. The knowledge is disambiguated, redundancies are removed, and keywords and terms are normalized into concepts.”


As Velterop explains in his lecture, Knewco wants to remove this ambiguity and redundancy by making so-called ‘smart triplets’ in concept spaces on an ontological, observational and hypothetical level. Velterop argues that, using this functionality, scientific publishers can add semantic functionality to their material by way of highlighting.
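A triplet in this sense is a (subject, predicate, object) statement connecting two concepts. As a hedged sketch of why chaining such triples is useful (the concepts and relations below are invented for illustration and are not taken from Knewco), walking the triples is what surfaces indirect connections between pieces of knowledge:

```python
# Invented example triples: (subject, predicate, object).
triples = [
    ("aspirin", "inhibits", "COX-2"),
    ("COX-2", "involved_in", "inflammation"),
    ("ibuprofen", "inhibits", "COX-2"),
]

def related(concept):
    """Collect concepts one or two hops away from the given concept."""
    direct = {o for s, p, o in triples if s == concept}
    indirect = {o for s, p, o in triples if s in direct}
    return direct | indirect

print(sorted(related("aspirin")))  # → ['COX-2', 'inflammation']
```

Starting from ‘aspirin’, the walk reaches ‘inflammation’ via ‘COX-2’ even though no single triple links them directly; multiplied over millions of statements, this is the kind of connection-making that can surface results a keyword search would never show.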

As Velterop concludes: the basic idea is simply to make relationships between things, to bring back serendipity, enabling you to find the things you did not even know you were searching for. This introduces new ways of thinking about information, and the technology may have direct consequences for the way researchers write their articles in the future. You can find more about Knewco here and here.


More information about the APE conference will follow soon.
