A Possible Future of Text: A Scenario
Frode Hegland 2018
This scenario does not require any advanced technology to build, only imagination and the support of the necessary infrastructures. It is meant to be illustrative and hopefully inspirational, not prescriptive or literal. The scenario discusses only basic text interactions; other data forms will necessarily be included in the documents but are out of scope for this brief presentation:
Joe, the demonstrator from Doug Engelbart’s 1962 paper†, sits down to give you a demo of what this system can be like. First, he clears his workspace so that all he has is a big, beautiful blank screen: about 27 inches of high-resolution display waiting to be filled...
Out of habit from earlier this week, when he was walking around thinking about this research, he speaks to his operating system AI, asking it to show him all the research on the topic he is interested in; this is essentially a keyword search based on titles and tags.
Knowledge Space ‘Sculpture’ of Documents
The screen fills with text representing the documents, far too many to read. He shrinks them with a pinch on his trackpad and ranks them by age on one axis and by number of citations received on the other. The space is in 3D, though it normally defaults to 2D unless the third dimension is actively used, to ensure the best rendering of the text. He goes through a few shapes like this, choosing to have the documents ranked in the third (z) dimension by certain keywords and so on. He plays around in this view perhaps a little too long, as it’s quite a compelling way to make sculptures, so he sets a view he has used for research before, modifying just a few of the keywords. Now this is something he can relate to.
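The layout just described can be sketched in a few lines. This is a minimal illustration, not a design: the document records, fields and ranking scheme are all invented for the example, and a real system would of course handle far richer metadata.

```python
from dataclasses import dataclass

@dataclass
class Doc:
    title: str
    year: int          # drives the x axis (age)
    citations: int     # drives the y axis
    keywords: set      # optionally drives the z axis

def layout(docs, z_keyword=None):
    """Map each document to a point in the 'sculpture' space.
    x = rank by age, y = rank by citations, z = 1 if the document
    features the chosen keyword, else 0 -- so with no z keyword the
    space collapses to 2D, as in the scenario's default view."""
    by_age = sorted(docs, key=lambda d: d.year)
    by_cites = sorted(docs, key=lambda d: d.citations)
    points = {}
    for d in docs:
        x = by_age.index(d)
        y = by_cites.index(d)
        z = 1 if z_keyword and z_keyword in d.keywords else 0
        points[d.title] = (x, y, z)
    return points

docs = [Doc("A", 1999, 120, {"hypertext"}),
        Doc("B", 2015, 30, {"ai"}),
        Doc("C", 2008, 60, {"hypertext", "ai"})]
print(layout(docs))               # 2D by default: z is 0 everywhere
print(layout(docs, "hypertext"))  # third axis activated by a keyword
```

Passing a keyword as the third axis is what "ranked in the third (z) dimension by certain keywords" would mean in this toy form.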
With this well-known sculpture space he chooses to add a few more criteria, in the form of keywords off to the side. Each keyword has a colour, and lines project from it into the documents that feature it, with thicker lines where there are more occurrences.
He sees something interesting: some keywords which should actually be at odds appear with great frequency in a group of documents on one side of the sculpture. He tells the system to show only those documents (he could do this by selecting the keywords and hitting ’s’ on his keyboard, or by speaking ‘show only these’).
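The "show only these" gesture amounts to filtering the space down to documents featuring every selected keyword. A minimal sketch, assuming hypothetical document records with a keyword set per document:

```python
def show_only(docs, selected_keywords):
    """Keep only documents that feature every selected keyword,
    mirroring the 'select keywords, hit s' gesture in the scenario."""
    selected = set(selected_keywords)
    return [d for d in docs if selected <= d["keywords"]]

docs = [
    {"title": "A", "keywords": {"order", "chaos", "systems"}},
    {"title": "B", "keywords": {"order", "systems"}},
    {"title": "C", "keywords": {"chaos"}},
]
# Only "A" contains both nominally at-odds keywords.
print([d["title"] for d in show_only(docs, {"order", "chaos"})])  # ['A']
```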
He then pinches to see the sentences in the documents which feature the keywords, and expands them to see what sections they are in. This is interesting, but he would like to know what those articles are about, so he expands them to see the last sentence of each article’s summary, something he has learnt over time is a good measure of the point of an article.
This view is a bit cramped now, so he hits the space bar to summon any selected article front and centre, where he can read it easily. He easily dismisses those which are not interesting and adds textual, illustrative or spoken annotations where he feels something special deserves them. He connects sections of different articles at will and, when reading an interesting sentence, summons any and all articles in the group which have a similar or related sentence.
This is quite fun, he thinks, but there are a few terms he has a feeling he does not understand, so he summons dictionary entries for a few of them, but they have so many definitions they are not useful. He then chooses to see which terms have hyperGlossary entries created by the authors, and lists those definitions for quick and clean reading. A curious thing is that most of the documents actually share the same hyperGlossary, but a few do not, making them outliers of a sort. There are too many terms he wants to understand, and they all connect, so he ‘lifts’ them into a new dimension (visually illustrated simply by fading everything else out to a high degree). This view looks much like a concept-map space of the kind he would have made on paper many years ago, but it’s actually what’s now called a Knowledge Graph space.
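Lifting glossary terms into a knowledge-graph space could be as simple as following the links that glossary definitions already contain. The glossary structure and terms below are invented purely for illustration; nothing here is a specification of hyperGlossary itself.

```python
# Hypothetical hyperGlossary entries: each term's entry records which
# other terms its definition links to. "Lifting" the terms into a
# knowledge graph is then just collecting those links as edges.
glossary = {
    "hypertext": {"definition": "text with links", "links": {"link"}},
    "link": {"definition": "a typed connection", "links": {"anchor"}},
    "anchor": {"definition": "an addressable span", "links": set()},
}

def knowledge_graph(glossary):
    """Return the term graph as a set of (term, linked_term) edges."""
    return {(term, dep)
            for term, entry in glossary.items()
            for dep in entry["links"]}

print(sorted(knowledge_graph(glossary)))
```

A viewer could then lay these edges out in any of the visual arrangements the scenario mentions.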
In this space he can explore the relationships between different concepts and definitions. He can choose the visual layout and the glossaries used. He spends some time here and then steps back to the workspace. OK, he has a clue as to what’s going on, so it’s time to do a spot of authoring.
New ‘document’. He starts typing his introduction. It’s all a bit cluttered, so he expands his text document to fill the screen. He goes on to explain what he has learnt, and as such needs to nip back out to the full view, which he does by using the ESC key to toggle views (this is an advanced workspace, but it functions perfectly well on modern keyboard, trackpad and monitor setups). Any view he has seen can be quickly and easily inserted for the future reader to have access to the same space.
This is all a bit much, so he decides to explode his document into a multiform, multidimensional space. Here he can easily rearrange his document, and choose to see headings only, headings and body text, connections and more.
Joe decides that he has something he is ready to share, so he publishes it.
Publishing involves going through a few entirely optional modules, several of which are bundled into one experience for him, such as checking for grammar, spelling and accidental plagiarism, and generating a basic automatic knowledge graph to check for coherence. He also has a chance to decide what meta-information to strip from the published document or to include, such as his undo timeline (off by default) and his links and annotations.
He also gets an automatically generated summary where he can check whether the document actually says what he thought it would. He finds one sentence in the summary which is not what he meant, clicks on it, and then sees everything in the document which contributed to that sentence. He fixes it, re-publishes, and it looks just fine.
The document is then processed to include anchors for high-resolution linking so that others may link to specific sections in the work, not just the whole document.
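One plausible way to mint such anchors, sketched here purely as an assumption (the scenario does not say how anchors are generated), is to derive a stable ID from the section's own text, so that a link to `doc#anchor` survives as long as the section is unchanged:

```python
import hashlib

def section_anchor(doc_id, section_text):
    """Derive a deterministic anchor ID from a section's text, so that
    independent readers minting a link to the same section get the
    same address for high-resolution linking."""
    digest = hashlib.sha256(section_text.encode("utf-8")).hexdigest()[:12]
    return f"{doc_id}#{digest}"

a1 = section_anchor("joe-2018-demo", "First section body.")
a2 = section_anchor("joe-2018-demo", "First section body.")
assert a1 == a2  # deterministic: any reader derives the same anchor
print(a1)
```

The same content-addressing idea would also support the scenario's later point about any instance of a document serving a citation equally well.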
He also has an archival option to publish with all associated explicit resources (web pages, linked documents, etc.) and implicit resources (dictionary and Wikipedia articles, etc.) included in a compressed companion file, for people in the future to better re-contextualise the work.
Once done, the document goes into his organisation’s online repository, with the repository having full access to everything inside the document in order to help with future queries and analysis by anyone with access permissions. It is also distributed to several other servers as a locked document, with the document name as the ID for anyone to link to, so that link rot won’t spoil the connections. This is of course analogous to how people in the past would cite a journal article: it did not matter where the article was, any instance of it would do.
When someone reads the resulting document they will have access to much richer interactions based on the included meta-information than what is possible when reading a PDF or a Word document.
This is one view; many are possible, but we can only find out which are more powerful, pleasurable and effective by building and testing. Simply discussing it is like discussing theoretical flavours of ice cream. Let’s taste them! :-)