APE 2010: The Semantic Desktop and the Article of the Future

During the second and third days of the APE conference, three presentations were given that focused on services to help the information worker cope with information overload. All were based on filtering out essential information through web-based services and interfaces centered on the (scholarly) user of these information resources. Andreas Dengel gave a presentation on the semantic desktop, a device that can serve as a supplement to a user's memory. As Dengel states, in our information-overloaded society we need instruments that help us find the relevant information we need. One of the main problems in this respect is that we need to know more than we can remember. The semantic desktop was thus designed to help the knowledge worker find and order relevant information. As he says, today many activities are focused on single information items; what we have on our desktop is thus a kind of temporary memory. The question is how we can turn this into an active agent, into a living Memex. How can we use modern technology to implement something similar? Computers most of all lack the capacity of human knowledge to be associative and to put things into perspective: they can read the information they have to process, but they cannot understand it. In this respect there is a gap between our minds and the desktop. A document can, however, also be perceived as a key which, while being read, opens a system of links to other documents, events, locations and tasks.

Referring to Kant's well-known adage that thoughts without content are empty and intuitions without concepts are blind, Dengel states that in the semantic triangle we refer to reality (what is going on), the signs and symbols we use to represent that reality, and our imagination (what we read). RDF, the Resource Description Framework, provides the basis for describing meaning via ontologies. An ontology is in this respect nothing more than a vocabulary to express facts about the world: subject, predicate, object. A fact is thus expressed as a subject-predicate-object triple, in which the subject, predicate and object are names for entities that represent something. But the question remains: how can we provide a shared vocabulary?
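The triple model Dengel describes can be sketched in a few lines of code. This is a minimal illustration only, not Dengel's actual system; the entity names and the `ex:` prefix are invented for the example.

```python
# A fact is a subject-predicate-object triple; a dataset is a set of such facts.
# All names below are hypothetical, in an RDF-like prefixed-name style.
triples = [
    ("ex:AndreasDengel", "ex:gaveTalkAt", "ex:APE2010"),
    ("ex:AndreasDengel", "ex:worksOn", "ex:SemanticDesktop"),
    ("ex:SemanticDesktop", "rdf:type", "ex:Device"),
]

def query(triples, s=None, p=None, o=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Everything we know about one subject:
facts = query(triples, s="ex:AndreasDengel")
```

Pattern matching over triples like this is, in essence, what a query over an RDF graph does; a shared vocabulary is then the agreement on which names the triples may use.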

As Dengel explains, the semantic desktop offers an evolutionary approach towards the semantic web. It is a form of ontology-based document understanding in which the individual's network of thoughts leads to a multi-dimensional and multi-perspective organization of content. For this reason, among others, we need to think of new concepts of archiving. The semantic desktop works via semantic hyperlinks, through which, for instance, email content is related to existing knowledge. It can be defined as follows:

“A Semantic Desktop is a device in which an individual stores all her digital information like documents, multimedia and messages. These are interpreted as Semantic Web resources, each is identified by a Uniform Resource Identifier (URI) and all data is accessible and queryable as RDF graph. Resources from the web can be stored and authored content can be shared with others. Ontologies allow the user to express personal mental models and form the semantic glue interconnecting information and systems. Applications respect this and store, read and communicate via ontologies and Semantic Web protocols. The Semantic Desktop is an enlarged supplement to the user’s memory.”

The final problem remains, however, according to Dengel: how can we integrate this technology in an efficient way into Gutenberg's world? You can find more information on the semantic desktop here.

A second presentation, by IJsbrand Jan Aalbersberg, focused on Elsevier's article of the future, which takes a task-oriented view in order to get away from the paper paradigm (you can find examples of such a future article here and here). Elsevier's article of the future, much like the semantic desktop described before, looks at the context of readers and the tasks readers perform. According to Aalbersberg, instead of adding the data to the text we would do better to integrate the growing amount of data into the main content/context, because the amount of data is growing too fast. In this respect we need to work with communities and be user-centered, to find out what users want to do with the data. In the article of the future every article gets a tabbed view (like a table of contents, but one that at the same time gives an ordered view of the parts). Most interesting is that every article gets a graphical abstract, which represents the main message through visual input. Key results (what is really achieved in the article and what is new) are highlighted. Audio and video materials are used to explain the context (for instance via video abstracts). The article of the future also caters to the multitasking individual, who can be reading the text while having the audio/video on in the background. The author affiliation is also highlighted, since the institute and the author determine the credibility of the article. According to Elsevier's user research, the most popular item in the article of the future would be the clickable figure, which could be used to navigate to sub-sections via click-and-jump. Unfortunately, Aalbersberg states, the technology has not yet advanced far enough to offer this function: at the moment it still entails far too much work for the author and delays production.

Elsevier's article of the future also concentrates on a data-focused presentation or summary, where one can get an independent view of text, figures and (zoomable) captions. Supplementary data is not presented as a separate file but can on request be integrated (it slides in) into the article. There is also the possibility of real-time reference analysis, where references can be sorted by date, author or journal. One can even see the whole sentences in which those references are made, so that readers can really see the context. As Aalbersberg explains, the prototype of this experiment was very much based on the present form of the article (so not on the semantic possibilities); the idea was thus to make a better presentation of the current article.

Dan Pollock from Nature presented a similar experiment based on searching, discovering and sharing research results. Nature also focuses on a user-centered world, in which the journal is increasingly being deconstructed and a disintermediation is taking place between the author and the publisher. As Pollock states, Nature's main goal is to improve search by focusing on precision. By offering OpenSearch (based on XML, bibliographical and index searching), it provides new ways for machines to share search results and to build functions on top of these results. Nature also focuses on a revised user interface/article presentation. Pollock also mentions resources like NatureEvents, semantic markup services (where one can click on compounds to enter entity pages), and Nature's focus on sharing and mobile devices, such as the Nature application they offer for the iPhone. According to their user research, articles presented via this app on the iPhone read quite well. Nature also offers offline services, whereby an article can be consulted directly from the iPhone without a direct Internet connection. Another service Nature offers focuses on blog aggregation, giving a credibility stamp: as Pollock states, Nature aggregates blogs and cleans up their references and hyperlinks, making the content useful and making use of their function as publishers. Nature also involves the user by integrating mashups based on Google Wave, with real-time collaboration and editing in the cloud via the Igor application, which focuses on authoring productivity tools. In the future Nature wants to take this sharing aspect even further, using user-generated content on the (online) Nature Network and by offering a Nature workbench, a personalized webpage (à la iGoogle).
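The OpenSearch approach Pollock mentions rests on a simple convention: the publisher advertises a search URL template, and a client expands it by substituting parameters. The sketch below shows that expansion mechanism under assumed names; the template URL is hypothetical, since the talk did not give Nature's actual endpoint.

```python
import re
from urllib.parse import quote

# Hypothetical template in OpenSearch 1.1 style ({name} is required,
# {name?} is optional); not Nature's real search endpoint.
TEMPLATE = "http://www.example.org/search?q={searchTerms}&page={startPage?}"

def expand(template, **params):
    """Expand an OpenSearch-style URL template with the given parameters.

    Required parameters must be supplied; optional ones ({name?})
    default to the empty string. Values are percent-encoded.
    """
    def repl(match):
        name, optional = match.group(1), match.group(2) == "?"
        if name in params:
            return quote(str(params[name]), safe="")
        if optional:
            return ""
        raise KeyError(f"missing required parameter: {name}")
    return re.sub(r"\{(\w+)(\??)\}", repl, template)

url = expand(TEMPLATE, searchTerms="semantic desktop")
# url == "http://www.example.org/search?q=semantic%20desktop&page="
```

Because the template is machine-readable, any client that understands the convention can build searches against the service, which is what makes it possible for others to layer functions on top of the results.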

These and other talks delivered at APE 2010 will soon be made available at



Open Reflections is created by Janneke Adema


