Synesthesia

Notes on stuff

Tagged Posts: Knowledge_Management

The Architecture of Personal Knowledge Management – 1

Back in July Harold Jarche posted a useful deconstruction of the processes involved in web-based personal knowledge management (PKM). Building on this, and in order to make a lot of implicit stuff in my head explicit, I’ve started developing the model into a full mapping of processes to tools.

I’ve chosen to use Archimate as a modelling language, and as I develop the model offline I will be posting views of it to pages linked from this wiki page.

Harold’s model looks like this:

As I began to unpick Harold’s seven processes I realised that although they are primarily focused on “self”, one key to understanding them is to identify the different roles that “self” (and “others”) play. This aspect of the model so far is shown in the Introductory View:

Alongside the work of developing models for each of the processes, I began to develop a view of the key information artefacts manipulated by the PKM processes.

I’ve also created pages on the wiki for the first iteration of modelling the individual processes, linking them down to a core set of application services, and over the next couple of weeks I’ll write blog posts for those.
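
To give a flavour of what “linking processes down to application services” means in practice, here is a minimal sketch of that mapping expressed as data. It is only an illustration: the process and service names below are placeholders, not the contents of the actual Archimate model.

```python
# Illustrative sketch of a process-to-application-service mapping, in the spirit of an
# Archimate application usage view. Process and service names are placeholders only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ApplicationService:
    name: str
    realised_by: str              # the tool (application component) providing the service

@dataclass
class PKMProcess:
    name: str
    uses: List[ApplicationService] = field(default_factory=list)

feeds      = ApplicationService("feed aggregation", realised_by="Awasu")
bookmarks  = ApplicationService("shared bookmarking", realised_by="del.icio.us")
publishing = ApplicationService("weblog publishing", realised_by="WordPress")
notes      = ApplicationService("hypertext note-keeping", realised_by="wiki")

processes = [
    PKMProcess("seek",  uses=[feeds, bookmarks]),
    PKMProcess("sense", uses=[notes]),
    PKMProcess("share", uses=[publishing, bookmarks]),
]

for process in processes:
    for service in process.uses:
        print(f"{process.name} -> {service.name} (realised by {service.realised_by})")
```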

Comments welcome to help refine this modelling effort.

More about conversations and processes

I’ve a hunch that the conceptual models discussed in Jeremy Aarons’ new paper (as I summarised here) could be a useful lever for unpicking the dilemma I found when I wrote that I prefer conversation, but you need process.

In that post I was drawing on conversations with (amongst others) Earl, Taka, Jon and Ton about the apparent conflict between the desire we all feel, as empowered, “wirearchical” knowledge-workers, to have systems that support a collaborative and improvisational working style, and the rigid, dehumanised processes that many companies see as a necessary corollary of delivering consistent service.

The particular paradox is that some of us (ok, me!) have on many occasions required companies (typically suppliers of services) to demonstrate those sorts of processes in order to satisfy our demands for clarity and measurability, even though we recognise that we may at the same time be preventing them from delivering the sorts of innovation that would truly delight us.

I find that the Davenport model helps me understand what is going on here – the underlying assumption of companies that apply prescriptive processes seems likely to be that the work involved is on the left-hand side of Davenport’s diagram – the Transaction and Integration models.

[Image: Davenport’s classification structure]

The underlying assumption has to be that the nature of the problems faced in these areas does not require interpretation, but rather the application of rules and standards, possibly requiring multiple areas to work together but always within a set of rules. This is almost exactly the model underpinning frameworks such as ITIL.

The other thing that strikes me as I read the contents of the boxes in the model is that they match closely with some of the criteria used in job grading systems. The boxes at the left of the model contain descriptions which are usually associated with lower-graded roles. This would seem to support my assertion from experience that companies which base their core competency around the deployment of such rigid processes are primarily concerned with containing costs and at the same time guaranteeing minimum levels of service from a transient workforce.

Work that can be described by the right-hand side of the model (e.g. the Collaboration and Expert models) is typically well-rewarded by job-grading schemes, pragmatic evidence that such skills are in relatively short supply. Professional services firms typically focus on reserving the efforts of these people for critical projects or areas requiring significant interaction. Such firms often also have (or desperately need) a core competence in taking the intellectual products of the right-hand side and “operationalising” them, i.e. turning them into formal processes and standards that can be scaled up and applied by the more numerous group of people paid lower wages to work “in the left-hand side”.

So far, so good – perhaps not a comfortable conclusion, but it would seem that the model works at least acceptably in certain situations. There is a certain basic business logic in reserving your most highly-skilled people for problems that need their attributes, whilst at the same time finding ways to manage the routine at a lower cost.

So where does the paradigm break?

I think there are at least two areas worthy of further exploration:

  • There is an assumption that the market such firms supply will largely pose routine problems which are amenable to a rules-and-standards approach – where does this break down?
  • Secondly, underlying the concerns that were expressed in the earlier conversation is a belief or hope that a more integrative approach to knowledge work has the potential to find ways of working that are more rewarding in either a commercial or a human sense.

 Ideas for later posts…

Integrating thinking and doing

21-03-2006

Jeremy Aarons has blogged the draft of a new paper, Supporting organisational knowledge work: Integrating thinking and doing in task-based support by Jeremy Aarons, Henry Linger & Frada Burstein.

They start by referencing Davenport’s classification structure for knowledge-intensive processes, which analyses knowledge work along the two axes of complexity and interdependence:

[Image: Davenport’s classification structure (from Davenport (2005) via Aarons (2006))]

 

However they then go on to criticise this as an analytic model on the grounds that much complex work fits into more than one box. In particular, they suggest that work which (by the Davenport classification) is largely within the Integration Model often has elements requiring significant precision and judgement from individuals – in other words it mixes in work from the Expert model.

They suggest then that a more appropriate guiding framework is Burstein and Linger’s Task-Based Knowledge Management, which considers knowledge work as an inherently collaborative activity which mixes pragmatic “doing” work into a conceptual “thinking” framework. In this approach the focus is on supporting rather than managing knowledge-work. The authors express this using the following diagram:

 

[Image: A task-based model of work (from Aarons (2006))]

The rest of the paper is devoted to a case study within the Australian Weather Service which supports the mixed approach, and yields examples of failed business systems which focussed only on the forecast-production aspect of the forecasting task. These are compared with a successful and hugely popular system which started as a maverick, ground-up project and which expressly addressed and supported the creation and maintenance of conceptual models of weather. This system, which is now the system of choice, addressed the production of output forecasts only as a piece of auxiliary functionality.

More on Business Strategy Patterns

08-03-2006

Allan Kelly commented on my post from last year about the possibilities of using pattern languages to describe business strategies, to point out that he has done quite a bit of this already.

So far the only paper I’ve had a chance to read is Business Strategy Patterns for The Innovative Company, which is a set of patterns derived from “Corporate Imagination and Expeditionary Marketing” (Hamel and Prahalad, 1991). In this Allan derives:

  • Innovative Products
  • Expeditionary Marketing
  • Separate Imaginative Teams

Apart from the patterns themselves there were two things I found interesting about this paper:

Firstly, Allan describes a rather rough ride he received at VikingPLoP 2004, where apparently a lot of negative attention was focussed on whether there was “prior art” for these patterns in the pattern field. I think there is something here that any autodidact will feel empathy with. Whereas the scientific community (rightly) puts a lot of emphasis on whether something is new knowledge, in the world of applications there is at least as much value in “new-to-me” knowledge, or even “applications of existing knowledge in a new context”. To me patterns and pattern languages fall firmly into the camps of education, application and transference between domains, not the camp of new knowledge creation. Given that, an over-obsession with “prior art” would seem to be rather inward-looking.

Secondly, Allan goes on to elaborate how his understanding and view of patterns have developed and changed, especially as a result of reading “The Springboard” (Stephen Denning, 2001) and “Patterns of Software” (Dick Gabriel, 1996), and that he now sees them as a particularly structured form of story about a problem domain. I find this an appealing viewpoint, as it harks back to the fundamental way that human beings pass on knowledge: through the telling of stories. Of course, the nature of stories is that each person who retells a story does so in a subtly different way, and over time the story changes. Extending the analogy, patterns too will change over time in a two-way exchange of knowledge between the pattern and the environment of the current user, so to say that a particular pattern is derived from (but not the same as) an earlier pattern is merely to state that evolution has occurred.

Update: Allan’s latest paper Strategies for Technology Companies has more on his interpretation of patterns as stories.

I prefer conversation, but you need process

06-03-2006

I think I’ve just caught myself out in a “one rule for me, another for you” attitude over something… A conversation across several blogs made me realise that I was facing both ways on an issue and hadn’t acknowledged it – oh the power of the internet!

Earl Mardle posted about Information Architecture as Scaffold based on a conversation with Ton (More on Ton’s position here). The gist of the view expressed by Earl and Ton is that all this “knowledge” that companies are seeking to “manage” is really only accessible through relationships, and once the relationship is established then the information that was part of the initial exchange is no longer relevant:

And that, my friends is what information does; it provides the scaffold that bridges the gap between people. A bridge that we call a conversation. And once you have built the bridge, you can take away the scaffold and it doesn’t make any difference, the conversation can continue because it no longer has any need for the information on which it was built, it has its own information; a history of itself, on which to draw and whenever the relationship is invoked, it uses any old bits of information lying around to propagate itself.

Earl then expands his view that in the real world of work, when you need to create some kind of output, you do it based on your own knowledge and the knowledge of your team, rather than through re-purposing some previous piece of corporate “knowledge”.

Several of us joined in the conversation in support of the view – in particular I made the point that the key thing that stands in the way of re-using the typical corporate knowledge artifacts (i.e. documents) is the lack of contextual information about why they were created in the way they were. A good provider of context would be a record of the conversations that happened around the document creation (e.g. through blogs and wikis) but that is still too difficult to add on if it requires people to learn new tools.

As a good counter to all this virulent agreement, Taka disagrees strongly with the concept of information as scaffolding around conversations – in his view the information is the conversation, and the scaffolding is the network of relationships that enables the conversation. That’s probably a difference of opinion over the meaning of words; where it gets interesting is what Taka goes on to say:

This is what I call the McDonalds question: how do you get low-skilled, inexperienced trainees to consistently produce hamburgers and fries to an acceptable level of quality? Process. And it’s the same thing in a corporate environment: how do you get people, who generally don’t really give a toss about what they’re doing, to write proposals and reports and all the other guff to an acceptable level? Document templates and guidelines.

Corporate KM and other such initiatives are our typically short-sighted attempt to find technical solutions to what is actually a people problem. There are plenty of people selling solutions and processes and methodologies to “fix” the information management issues that exist within companies because it’s an easier problem to tackle than the real underlying issue: how do you get people to actually give a damn about what they’re doing?

Which Earl extends and restates:

Underlying what I was talking about in the other post is to make explicit that very fact; organisations that think of their people as fungible will be led inexorably down the path of document management and “knowledge capture” solutions that will not help them survive, and they don’t deserve to.

The kicker for all this came from Euan Semple the other night who told me about a company rep who asked him, “how do you stop corporate knowledge leaving with the person?”

So, to reiterate a point that might have been a bit buried in the verbiage, organisations with a future do not need KM systems because they have active, engaged people who know what the hell they are doing.

And that is where I did the metaphorical forehead-slap.

Because I’m all for work practices based on conversation and shared context where they involve me or my colleagues – of course we are wonderful knowledge-workers who thrive in such an environment! But, as I realised, when it comes to speaking with suppliers of IT services, or designing how our organisation should inter-operate with their organisations, it’s always about process.

In part that’s about how they work; when I am in that purchasing role, how they organise themselves to deliver good, consistent service to the company I am representing is not directly my concern – my concern is being sure of what they will deliver. But I’m sure we throw out quite a lot of baby with that bath water. We struggle to find ways of getting the sort of human, responsive service we want at a price we are prepared to pay.

So why is this a problem? The clue is in the words I used – “good, consistent service”. The whole world of out-sourced services companies is about consistency. The way services are usually measured –  “x% of faults fixed within y hours” – is about aggregation, statistics, removing variability. The companies who supply these services, in their turn, are looking for ways to meet those contractual arrangements that allow them to make a profit. The major costs in any service are the people who deliver it, so inevitably there is downward pressure on salaries and a drive to make everything a process that can be automated as far as possible.

In that sense, modern out-sourcers truly are the last bastions of Taylorism. Almost as a foregone conclusion, there is low job satisfaction in these bastions of “service”, leading to high turnover of front-line staff, leading in turn to increased management pressure for process and consistency.

I think there are several conflicts at work here:

  • Be consistent v. Delight the customer
  • Maximise productivity by using low-skilled staff v. Maximise productivity by supporting people to use all of their skills and knowledge
  • Protect the service against staff turn-over v. Protect the service by creating an environment where people want to stay and grow
  • Get the lowest cost service from suppliers v. get service that truly helps your business
  • and probably some more…

The simple answer to all of this seems to be “work in small teams” and only use small suppliers, but it’s not clear to me how that scales. When I think about small teams, I can see how a wirearchical approach works when there are several companies involved (in the limit, several individuals), but again, I feel various mental blocks when I think about scaling that. I’m still struggling with these and other dichotomies, which is probably a good sign that it’s time to draw the CRT (Current Reality Tree)! Food for a later post I suspect.

A new tool: Awasu

Via Earl Mardle I’ve found a new tool to add to my personal knowledge management toolkit: Awasu

Although the core of the product is an aggregator, it’s a lot more than that, as it offers a number of ways of interacting with the flow of information through the tool, both manually and in various automated ways. It also offers the facility to add “channel hooks” – plugins which carry out specific actions on selected channels.

Having installed the product, I must admit the first learning hurdle was to get used to a thick-client aggregator rather than my normal approach with Bloglines.

The next challenge was finding an easy way to blog using the tool. Although Earl recommends a workflow using Qumana, I’m not sure that’s the right one for me. I think that reticence is a little about Qumana: I’ve tried the tool before, in its earlier days and didn’t stick with it, so maybe I am transferring that to the latest version. Also, Earl’s proposed method involves using the Workpads and Reports in Awasu – functionality that I have played with, but not yet got to grips with fully. There have been a couple of funnies which might be bugs or might be configuration problems.

I shall keep experimenting with different methods of using the tool and integrating it into my work, and may well come back to the approach Earl suggests. In the interim I have taken advantage of the easily-configurable User Tools menu in Awasu to call up the normal WordPress posting page for this blog within the Awasu main window, pre-populated with key content from the source page.

A synchronicity of KM?

20-09-2005

Dave Pollard has written about the psychology of information, or why we don’t share stuff – the organisational and human factors that impede knowledge-sharing:

  1. Bad news rarely travels upwards in organizations
  2. People share information generously peer-to-peer, but begrudgingly upwards, and sparingly downwards in organizational hierarchies.
  3. People find it easier and more satisfying to reinvent the wheel.
  4. People only accept and internalize information that fits with their mental models and frames.
  5. People cannot readily differentiate useful information from useless information.
  6. The true cost of acquiring information and the cost of not knowing are both greatly underestimated in most organizations.
  7. People know more than they can tell, and tell more than they can write down.
  8. People can internalize information presented graphically more easily and fully than information presented as text, and understand information conveyed through stories better than information presented analytically.
  9. Most people want their friends, and even people they don’t know, to succeed, and people they dislike to fail and this has a bearing on their information-sharing behaviour.
  10. People are averse to sharing information orally, and even more averse to sharing it in written form, if they perceive any risk of it being misused or misinterpreted.
  11. People are generally reluctant to admit they don’t know, or don’t understand, something.
  12. People don’t take care of shared information resources.
  13. In some organizations, internal competition mitigates against open sharing of information.
  14. Some modest people underestimate the value of what they know.
  15. We all learn differently.
  16. Rewards for sharing knowledge don’t work.

The point, of course, being that it’s almost nothing to do with the technology.

At almost the same time, David Weinberger has published an article in KM World about the impact of the social software tools that Euan has managed to sneak in “under the radar”… where again the emphasis has been on using the lightest-possible technology to support conversations.

Interesting juxtaposition.

Whose folksonomy is it?

In how to build on bubble-up folksonomies Tom Coates says:

[...] The concept is really simple – there are concepts in the world that can be loosely described as being made up of aggregations of other smaller component concepts. In such systems, if you encourage the tagging of the smallest component parts, then you can aggregate those tags up through the whole system. You get – essentially – free metadata on a whole range of other concepts [...]

and goes on to play with ideas for aggregating tags on radio songs into folksonomic descriptions of aggregates of those songs (radio shows, albums) and aggregations of aggregations (a radio station, an artist’s body of work).

Reading it I was struck by a link to something I wrote about a year ago on semantic aggregation and filtering (I’m using aggregation to refer to a slightly different thing in that post) – so from that I would add to Tom’s idea the possibility of allowing new tags to be added to describe different entities in the aggregation – e.g. directly tagging the shows as well as using tags derived from the tags applied to the songs.

Tom goes on to suggest that by using the links between these emergent tags you could lead people to new-to-them material that reflected the best example of things they may like – “best” being determined in a Wisdom-of-Crowds-like way by the station’s listeners.

The concept makes immense sense from the perspective of a broadcaster that is seeking to create new metadata about material, and to provide listeners with the most engaging experience.

From the perspective of a listener though, I’d like another layer. Alongside the “transmitter-side” aggregation of metadata from the broadcaster based on the tags submitted by their listeners, I’d like a “receiver-side” metadata aggregator that aggregates my tags across all the media I’ve ever listened to over time – and on top of that a way of comparing “my” folksonomy with “their” folksonomies so that I can find new artists or stations that I am likely to enjoy.
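
To make the bubble-up and compare ideas a little more concrete, here is a rough sketch, assuming the simplest possible representation of a folksonomy as tag counts. The tags, items and similarity measure are invented for illustration, not anything Tom proposes.

```python
# Rough sketch: "bubble-up" tag aggregation plus a receiver-side comparison of
# folksonomies. Tags and data are invented; cosine similarity is one possible measure.
from collections import Counter
from math import sqrt

def bubble_up(component_tags):
    """Aggregate the tags of component items (e.g. songs) into a folksonomy
    for the containing item (e.g. a show, an album or a station)."""
    total = Counter()
    for tags in component_tags:
        total.update(tags)
    return total

def similarity(mine, theirs):
    """Cosine similarity between two tag-count folksonomies (0 = nothing shared)."""
    dot = sum(mine[t] * theirs[t] for t in set(mine) & set(theirs))
    norm = sqrt(sum(v * v for v in mine.values())) * sqrt(sum(v * v for v in theirs.values()))
    return dot / norm if norm else 0.0

# Transmitter side: a station's folksonomy bubbles up from the tags on its songs.
station = bubble_up([["jazz", "piano"], ["jazz", "vocal"], ["blues"]])

# Receiver side: my own tags, aggregated across everything I've ever listened to.
mine = Counter({"jazz": 5, "piano": 3, "ambient": 2})

print(station)                      # Counter({'jazz': 2, 'piano': 1, 'vocal': 1, 'blues': 1})
print(round(similarity(mine, station), 2))
```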

Credit Where It’s Due

Inside Knowledge has a great article on the work my friend and colleague Euan Semple has been getting up to. He introduced me to blogging, so I’m really pleased to see him getting the sort of profile he deserves.

Blogroll Additions – KM blogs

Jack Vinson has helpfully listed over 20 Knowledge Management blogs that he reads regularly. I already had about half of them on my sources list; I’ve now added Conniecto, How do you know that?, The Pragmatics of KM Equals Success, Knowledgeline, Mopsos, Myndsi, Networks, Complexity and Relatedness, …no straight lines…, Reflexions, Scrapbook of My Life, SoulSoup, x28’s Blog and yet another f*$#&@! learning experience.

Connecting People With Content

09-03-2005

Shawn Callahan points to his own white paper Using Content To Create Connections Among People [PDF] that advocates (in a style accessible to the non-techie) the use of blogs, feeds and aggregators as a more flexible solution (compared with a grand “knowledge repository”) to sharing knowledge within a company and between a company and its customers.

The freedom to think

When I wrote this article I started from a belief that by combining Denham’s thoughts with my earlier post I had seen a new aspect of the possibilities for knowledge aggregation and filtering.

Then I read the background links that led to this addition, added in a spirit of “Oh, perhaps it wasn’t that original after all, I’d better acknowledge this other work”. In other words some of the glow of achievement I felt about spotting the earlier idea had been tarnished.

Today via Phil Jones’ wiki I found this David Weinberger post from a few years ago which has restored some of the good feelings – others may have had a similar idea before but that doesn’t reduce the value to me of the new-to-me thought.

Testing Compendium and the Illusion of Explanatory Depth

At the suggestion of Marc Eisenstadt I’ve been trying Compendium.

The tool itself seems relatively straightforward (I have used both cognitive mapping and mind map software before so this may not be a fair assessment of how a beginner would get on) – the trick I suspect is in learning a methodical approach to applying it to a specific task.

I experimented with trying to map out the exchange of views in the recent “Hierarchy” exchange (1 2 3 4 5 6 7) [order may not be quite right] between Dave Rogers, Jon Husband and Euan Semple but ran out of steam partway through analysing the second post. I don’t think that is a comment about Compendium, more a facet of the difficulty of mapping this sort of writing, especially when you are very rusty at that sort of thing.

This will be the problem with creating the semantic web: it’s completely conceivable to have nice well-formed RDF triples as a way of navigating information that is already structured, but the vast majority of human knowledge is tied up in messy human-written text.
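
For the already-structured end of that spectrum, here is a minimal sketch of what navigable RDF triples look like in practice, using the rdflib library. The namespace, resources and predicates are invented purely for illustration.

```python
# Minimal sketch of navigable RDF triples using rdflib; the vocabulary is invented.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/km/")

g = Graph()
post = EX["hierarchy-post-2"]
g.add((post, RDF.type, EX.BlogPost))
g.add((post, EX.author, Literal("Jon Husband")))
g.add((post, EX.respondsTo, EX["hierarchy-post-1"]))

# Navigating the structure is then just a matter of following predicates.
for subject, _, target in g.triples((None, EX.respondsTo, None)):
    print(f"{subject} responds to {target}")
```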

My gut feeling is that most of us, most of the time, don’t analyse information to the depth that is needed to make good use of a tool such as Compendium. Certainly my tendency is for a strong degree of pragmatism in my learning – I’d suggest that generally knowledge-workers dig just enough to get a sufficient gist of things for the immediate purpose – as long as I have good enough knowledge for the task in hand then why seek more precision?

The willingness to stop digging could be increased by the illusion of explanatory depth. This tendency for people to over-estimate their knowledge of a subject where there are attractive intuitive explanations was identified in 2002 by Frank Keil and Leonid Rozenblit. I’m probably doing it now of course!

The next area to try Compendium will be working the other way – assembling a set of facts or assumptions about the world and seeing if it helps extrapolate meaningful abstractions. The obvious application of this will be in strategy development.

Wiki page for evaluation notes: [wiki]Compendium[/wiki]

Social categorisation – whose perspective?

Denham Grey has been thinking about knowledge management for a long time – it looks like he has been turning his thoughts to some of the issues I touched on in Semantic Aggregation and Filtering. He writes in Social Categorisation:

The ability to develop and share a common taxonomy / classification / ontology is a very fundamental knowledge practice that leverages knowledge creation, communication, promotes meaning and enables sense-making.

Tools to do this are far and few right now but likely to be moving toward center stage in the near future…

He adds a fourth mechanism for extracting and sharing a taxonomy:

The starting point for this advance may be tools to extract key concepts from free form text.

Imagine if you wrote a text, ran a key concept parser, compared the extracted concepts to your groups ontology then selected the best fit meta-tags for later search and browsing – Now that would really assist content sharing!

to which I would add another nuance – as well as deploying these tools to categorise your own text, how about deploying them inside a feed aggregator with mapping rules based on the reader’s frame of reference? This way, in addition to using the author’s taxonomy, you could decide how to categorise a piece of content in the reader’s context.
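
As a sketch of what those reader-side mapping rules might look like, here is a small illustration: tags (or extracted key concepts) arriving with an item are translated into the reader’s own categories, with unmapped tags passed through untouched. The rules and tags are invented examples, not a description of any existing aggregator.

```python
# Sketch of reader-side re-categorisation inside a feed aggregator.
# Author-side concepts are mapped onto the reader's categories via simple rules.
READER_RULES = {
    "taxonomy": "classification",
    "folksonomy": "classification",
    "weblog": "tools",
    "wiki": "tools",
}

def recategorise(item_tags, rules=READER_RULES):
    """Express an item's tags in the reader's frame of reference,
    keeping any tag that has no mapping rule."""
    return sorted({rules.get(tag, tag) for tag in item_tags})

print(recategorise(["folksonomy", "weblog", "emergence"]))
# -> ['classification', 'emergence', 'tools']
```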

Update: From this article via Denham’s wiki it looks like there has been a lot of work in this area already…

Semantic aggregation and filtering

Dale Pike has some interesting things to say about semantic focus as an organising principle for understanding technology – in particular for explaining how a specific aspect of some arbitrary technology helps with specific tasks. The down side of this, he observes, is that tools tend to become pigeon-holed by the application that is first used to explain them – seeing the tool in a different context might enable new uses but for many people there is a cognitive barrier set by the first mental model they have created.

He extends the thought to consider how context modifies the use we can make of specific pieces of information – as an example, he contrasts notes contributed to a topically-focused space such as a bulletin board or mailing list with the same notes expressed in an individually-focused space such as a weblog. He sees syndication formats such as RSS as the connecting bridge that allows people to assemble published information into unique contextualised views that serve their specific needs.

This idea seems to be teasingly close to what I have described as projections of knowledge – each context is a map of the knowledge space projected in a particular way. Beyond the raw mechanics of content feeds the key to assembling projections/views is being able to find and select the information you want in an automatable way. The problem is to determine which concepts are “close” to each other on the map in question.

Most approaches that I have heard of use categorisation and filtering as a proxy for measuring conceptual proximity. Whether you use shared taxonomies or the more emergent “folksonomy” approach a mechanism is needed to determine which labels are close to each other within the map of choice.

I can imagine this happening in a number of ways.

  • At the most basic level tools could use some shared thesaurus to identify synonymous labels.
  • An enhancement would be to allow the user to view a set of available labels and identify their own associations – this could in turn be published to allow “association aggregators” to form emergent thesauri.
  • Even more subtle would be to allow the user to modify the view parameters by assigning votes to the returned concepts.

I have a hunch that all of this is buildable with currently-available standards. There may be tools out there already but I suspect they are proprietary – what we need are the simple building blocks to allow a “small pieces loosely joined” solution.
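
As a very rough illustration of the first two mechanisms in the list above, here is a sketch of filtering by label with a synonym thesaurus, and of building an emergent thesaurus by aggregating the label associations that individual users have published. Everything here (labels, associations, the two-vote threshold) is invented for illustration.

```python
# Sketch: thesaurus-based label matching plus an emergent thesaurus built by
# aggregating user-published label associations. All data is invented.
from collections import Counter

def matches(item_labels, wanted, thesaurus):
    """True if the item carries the wanted label or any known synonym of it."""
    synonyms = {wanted} | thesaurus.get(wanted, set())
    return bool(set(item_labels) & synonyms)

def emergent_thesaurus(published_associations, min_votes=2):
    """Aggregate (label, label) pairs published by many users, keeping only
    associations asserted by at least min_votes users."""
    counts = Counter(frozenset(pair) for pair in published_associations)
    thesaurus = {}
    for pair, votes in counts.items():
        if votes >= min_votes and len(pair) == 2:
            a, b = pair
            thesaurus.setdefault(a, set()).add(b)
            thesaurus.setdefault(b, set()).add(a)
    return thesaurus

associations = [("km", "knowledge-management"), ("km", "knowledge-management"), ("km", "kaizen")]
thesaurus = emergent_thesaurus(associations)
print(matches(["knowledge-management", "blogs"], "km", thesaurus))   # True
```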


Wiki page: [wiki]SemanticAggregator[/wiki]

Why wiki doesn’t work – one person’s experience

28-10-2004

I’ve been introducing wikis into my workplace, especially for project teams – not in any forced way but more by making the technology available and starting to use it. Understandably the take-up is mixed, but I was most surprised by the very strong aversion expressed by another senior technology manager with whom I have to produce complex joint strategy documents. Last week the opportunity came up to ask “why doesn’t wiki work for you?”; I was expecting answers about the difference between a web interface and a word processor, or perhaps issues about the markup, but what he told me had nothing to do with the technology and everything to do with mental models.

I’ve written before about how blog and wiki fit together for me, and about my mental model of document outlines and mindmaps as two-dimensional projections of knowledge. In short, I’ve found that for capturing and structuring “flow of thought” ideas in a way that can later be linked together, the easy hypertext writing style of wikis works very well – I find myself thinking in terms of hyperlinks as I write.

By contrast my colleague finds the typical collection of wiki pages with dense hyperlinks very difficult to map into a mental structure of information. Under some gentle questioning he explained that for him information is always hierarchical – the technique he has found that works for him when organising knowledge is to think in “high level” concepts and then expand these down into details – the sort of model that fits very well with traditional outlining or a mindmap where you are not allowed to cross-link between branches.

What for some people is a strength of wiki – that any given page can appear in many different contexts depending on the relationship of hyperlinks – is for him a disadvantage of almost show-stopping proportions (certainly enough to make it too much effort to switch to the tool) because it is impossible to see a single clear hierarchy of information.

The obvious workaround that we will try for areas where we have to work jointly is for him to write “his stuff” in outline form and for me to construct index pages that present a view into “my stuff”; as we work on joint editing we can then pull together further pages that present different hierarchies.
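
For what it’s worth, the “index pages that present a view” part of that workaround is mechanisable. Here is a small sketch, assuming the wiki’s link structure is available as a page-to-links mapping: a breadth-first walk from a chosen root yields a single spanning hierarchy that can be rendered as an outline, even though the underlying pages remain densely cross-linked. The page names are invented.

```python
# Sketch: derive one outline (spanning tree) view from a cross-linked set of wiki pages.
from collections import deque

LINKS = {  # page -> pages it links to (invented example data)
    "Strategy": ["Networks", "Processes"],
    "Networks": ["Processes", "Tools"],
    "Processes": ["Tools"],
    "Tools": ["Networks"],        # a cross-link that a strict outline cannot show
}

def outline(root, links):
    """Breadth-first walk of the link graph; each page appears exactly once."""
    seen, result, queue = {root}, [], deque([(0, root)])
    while queue:
        depth, page = queue.popleft()
        result.append((depth, page))
        for target in links.get(page, []):
            if target not in seen:
                seen.add(target)
                queue.append((depth + 1, target))
    return result

for depth, page in outline("Strategy", LINKS):
    print("  " * depth + page)
```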

I can sense a few vague ideas starting to bubble about how the tool itself could be changed but they aren’t making themselves articulable yet.

Projections of knowledge

05-10-2004

Several people have blogged Global Knowledge Review, the new venture from David Gurteen.

I’m thinking about a longer post on my reactions to the whole document; however, in passing I wanted to flag something that Lilia wrote in the sample copy that caught my eye. (Update: Lilia has pointed me to her original post that spawned this article.)

It is probably a matter of personal preferences or thinking style, but I always have problems with tree structures. [...] Another example is about mind-mapping tools [...] Those that I tried force me to organise my ideas into a tree structure. Of course, visualisation is nice to get an overview of ideas (especially if you use it for others), but forced tree structure makes these maps useless for (my) thinking. I tried to use mind-mapping software to structure my ideas for writing papers, but it didn’t work. It’s fine on paper for drawing a web of relations and thinking about steps of explaining them, but drawing a tree on my screen doesn’t make any sense [...] for me ideas live as webs. [...]

Reading this I was struck by the analogy with map projections – just as rendering a 3D world onto a 2D map causes distortions, rendering an interconnected web of ideas into a two-dimensional tree (e.g. a mind map or an outline) will distort it, and each choice of projection focuses on a different aspect of what is being mapped.

How does our choice of view for information affect the interpretation we place on that information?

How blog and wiki fit together (for me)

In the same post that I just blogged Johnnie Moore goes on to say:

Traditional models of group thinking seem based on me trying to cement my well-formed brick of thought to your well-formed brick. Increasingly, I find much more satisfaction in sharing the less-formed ideas and responses I have to conversations. I sense that by doing so, it’s possible to create some sense of joint intelligence that can get beyond existing mental models.
I suppose that my blogging process tends towards bricks, as I write down ideas and get to tweak and edit them and improve them, to make them more palatable to the outside world.

For me this is the nub of why I need a blog plus me-writable and world-writable wikis.

Blog posts by their nature are a snapshot at a point in time and therefore imply some form of stasis. Wiki pages however are timeless and hence never finished, always open to flux.

The writing style that has started to evolve since I’ve had this combination of tools is to scatter thoughts around the wiki-spaces until some juxtaposition forms that is sufficiently clear to create a blog entry. The blog entry becomes a picture of my thinking at a point in time and is therefore essential to mapping out some kind of path. The state of the wiki pages continues to evolve – by looking at where there is activity you can see which parts of my mental associations are currently at the forefront of attention.

Mental models and the ladder of inference

Johnnie Moore is thinking about changing mental models, in particular how to ensure that group work really does take advantage of the collective intelligence of the group rather than falling back to a simple comparison or accumulation of everyone’s individual world view.

This reminded me of the work published by Chris Argyris, Peter Senge and others on the [bliki]LadderOfInference[/bliki]. I wonder how we could encapsulate this thinking into the world of the blog?

The Power of Context

Amy Gahran writes about the power of context – How Arranging Ideas Spawns New Ideas – to stimulate new thoughts around a subject:

No idea exists in a vacuum. It is connected to related ideas, and to the real world, and to other people’s perspectives. Those connecting threads of context are where the vast creative potential of the human mind lies. [Source: http://blog.contentious.com/archives/000288.html]

The idea that the mind works associatively is pretty well established – amongst many other things it’s the key behind mind mapping. Making public some of my own associations I can see a connection between Amy’s thoughts, Tony Goodson’s Butterfly moments and bricolage (worth noting that Tony is a fervent advocate of mind mapping) and the ideas I tried to capture here, in particular:

The benefits of any specific piece of knowledge are not always foreseeable until the right combination of circumstances and other people arises – in other words unpredictable emergent behaviour;

Another possible connection is to The Social Origins of Good Ideas

Where Amy particularly extends our thinking is the way she then derives some very specific ideas for enhancements to knowledge management tools that would take advantage of associative thinking:

  • Random elements [...]
  • Visual juxtaposition [...]
  • Embedded brainstorming tools
  • Sticky notes (that capture context for the thought) [...]

There’s an interesting challenge for developers here but not an insurmountable one I think… Just needs someone with the skill to hang together a few existing tools perhaps?

In a sense a blog entry like this is a form of the fourth item (“Sticky notes”) because it captures an idea and, via a combination of hyperlinks and the use of trackbacks, captures a lot of the context as well – but it’s not exactly fast – how many ideas slip by before you can grab the idea and its context? I think we need a system that treats “ideas” as some kind of atom and deals with the messy business of collecting and managing URIs in the background.
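
Purely as an illustration of what such an “idea atom” might carry, here is a minimal sketch; the field names are invented, and the URIs are just pages already mentioned in this post.

```python
# Sketch: an "idea" as an atom that carries its own context, with the surrounding
# URIs collected by the system rather than by the writer. Field names are invented.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional

@dataclass
class IdeaAtom:
    text: str
    captured_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    source_uri: Optional[str] = None                        # page being read at capture time
    context_uris: List[str] = field(default_factory=list)   # gathered in the background

idea = IdeaAtom(
    text="Connecting threads of context are where creative potential lies",
    source_uri="http://blog.contentious.com/archives/000288.html",
    context_uris=["http://www.synesthesia.co.uk/blog/"],
)
print(idea)
```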

For embedded brainstorming tools could someone integrate Freemind with a bliki?

Are there any open source developers out there who feel inspired by this?

Collaborative Note-taking

Steph Booth gives a really clear explanation of Taking Collaborative Notes at BlogTalk
[via Chocolate and Vodka]

Unpredictable Emergence of Learning

10-05-2004

Interesting synchronicity of posts that have caught my attention in the last 24 hours

Yesterday I blogged Suw Charman’s thoughts about being a generalist / polymath, in particular the tendency for useful real-world knowledge to come out of the unique overlaps between fields created by the particular experiences of one person.

Earlier today I linked to Tony Goodson on Butterfly Moments and Bricolage – his experience that general tinkering about across various subjects and ideas often leads to unexpected benefits at later times.

And now I’ve just read George Por writing on How local meetings with global experts can boost CI, in which he advocates cross-fertilization of generative ideas and transformative practices across organizational, cultural and geographic boundaries, and goes on to advocate horizontalization of learning in a given domain between those who have been giving more or less attention to explore and contribute to that domain – that to me sounds like conversations between specialists and generalists.

George ends his post by extolling the virtues of asynchronous methods such as blogs to make the best of face-to-face conversations.

So what is the link between these entries?

For me there are several:

  • Generalist / Polymath learning exists, contributes knowledge and helps the horizontal distribution of knowledge;
  • The public, linked, asynchronous nature of blogs and related technologies both exposes conversations to a wider pool of people and helps the ideas start to flow before any face-to-face meeting;
  • The benefits of any specific piece of knowledge are not always foreseeable until the right combination of circumstances and other people arises – in other words unpredictable emergent behaviour;

Update

[bliki]Fragmentation And Wholeness[/bliki]

Polymaths

09-05-2004

Suw Charman writes a long article on the benefits (and also the drawbacks) of being a generalist.

I can empathise with what she says – I too have a “grasshopper” mind. This, I think, is why I find blogging and wikis useful: by allowing the grasshopper to leave a track as it jumps where it may, these tools help the more reflective parts of my mind to see progress within each area.

I know that many aspects of my work draw on skills I have learned from a range of experiences – for example my approach to coaching benefits hugely from my understanding of systems and feedback, while my abilities with a project team are helped by my coaching. Nothing is wasted; it is all part of the complete package you bring to a job.

The problem lies, I think, with the question of measuring or proving this skill. Our entire academic system of qualifications is built around narrower and narrower specialism as you push forward the boundaries of knowledge in one particular area.

How do you measure achievement or knowledge creation that relies on new syntheses?

Weblog Scenarios

Weblog Scenarios. Dale Pike identifies seven useful scenarios for weblogs in a professional knowledge context.

Cyclical knowledge development

Ian Glendenning (Psybertron Knowledge Modelling WebLog) points to some articles from disparate domains on cyclical approaches to building knowledge.

I’ve cross-filed these in the Action Research category because I think they may have relevance to Reflection 2

Wiki Lessons Learned

Sam Ruby has posted the slides (http://intertwingly.net/slides/2004/etcon/) from his presentation at ETCon on lessons learned from running the Echo wiki (http://www.intertwingly.net/wiki/pie/FrontPage). He notes:

If you have a coherently aligned and focused community, a wiki can be a very powerful thing, allowing collaboration to proceed at an astounding pace.
If you have a community in imperfect alignment, a wiki will accurately reflect this state. Given a group with a genuine desire to align, a wiki can provide a powerful and positive feedback loop.
But what happens when you have an unbounded community with divergent goals?

He also mentions the enormous energy that has gone into the project, resulting in over 1000 pages on the wiki – some of that energy is deliberately disruptive or destructive – creating the need for a role he describes:

In addition to host, a role that I have played is one of lightning rod. A number of hurtful and untrue things have been said about me, and the company I work for.
“A grounded metal rod placed high on a structure to prevent damage by conducting lightning to the ground.”
Note the recurring theme of energy production, absorption, and dissipation…

He compares the characteristics of mailing lists and blogs with the wiki, flags the importance of snapshots, and concludes with the following lessons:

  1. Time counts
  2. Cultivate contributors
  3. Use a mix of strategies

It strikes me that there are some good candidate collaboration patterns (http://www.synesthesia.co.uk/blog/archives/systems/000336.php) here – I’ll play around on the Synesthesia wiki (http://synesthesia.co.uk/tiki/) and blog when I have some drafts…

Coaching As Knowledge Creation

I was talking about the coaching process with my Coaching Supervisor. We were discussing the implicit power-relationship in coaching (Expert – Novice) and how we could work with any positive aspects of that and reduce any negative aspects.

I wondered if it was useful to think of the coaching process as a form of mutual learning – or indeed as a form of mutual knowledge creation…

continued on the wiki

Making meaning

In Making meaning (http://denham.typepad.com/km/2004/01/making_meaning.html), Denham Grey explains how we come to share meaning and the relation between meaning, understanding, ontology and knowledge.

Actionable Knowledge

02-12-2003

Ton, Lilia, Dina and Gary have been discussing how to turn blogs into actionable knowledge.

Amongst the attractors of the conversation is a frustration at not taking the loose ends of blog-nurtured ideas further:

I do have a feeling that I’m not responsive enough in picking up the thoughts we dream up here in the blogosphere and turn them into action. The blogs reveal emerging patterns, and we can nurture the memes we think important, and block or criticise the ones we think are not.
But I seem to be less succesfull at moving stuff from the complex and un-ordered realm (to adopt some of Dave Snowden’s vocabulary) where my addiction is fed, to the more ordered realm of the knowable and practice.

and equally a concern that we should not close off interesting avenues through premature crystallisation into action:

The loose ends offer me a sense of the possible, a landscape that can go anywhere, a sense of adventure that keeps coaxing me back to explore a little more. I wouldn’t want it tidied up in a tight focused and deadlined bundle because I know, philosophically, to do so would require closing off many of these possibilities, discarding the undiscovered territories.

After I’d let these posts mull around in my mind for a day or two, the first thought that came to me was this – just because I don’t necessarily blog about actions I have taken as a result of blog-inspired knowledge creation, that doesn’t mean there wasn’t actionable knowledge created!

We all make decisions (often subconsciously) about what to blog and what not to blog. For many people (myself included) the most potent area of such decisions is around our relationship to our employer (or clients, for the self-employed).

My “day job” and most of my coaching work are both in the context of the same organisation – an organisation that has a very high public profile and puts strong confidentiality clauses in our contracts… I’ve had conversations with other bloggers who work in the same place about how we tread the line between bringing insight from the things we do there and keeping ourselves employed – for most people this comes down to not explicitly naming the place and generalising sufficiently from things that happen so that specifics cannot be identified.

The second aspect of work-based discretion relates to the very nature of the work – particularly in my case to coaching – for obvious reasons I am not going to relate things on a public site that could be identified by a specific coachee.

The third area of discretion relates to friends / partners, children – although many people do blog about their personal lives I choose not to.

But just because I blog carefully (or not at all) about those areas of my life does not mean that I don’t derive actionable knowledge from blogging that I can apply to those domains. The dilemma though is how to report that back? Some actions won’t make it through my blog-filters; others may be delayed or distorted; in either case there is a break in the learning cycle with my blog learning colleagues.

This is not about the trust I have in the people with whom I have blogosphere conversations; it is more about who else is eavesdropping. Is there any way to resolve this whilst still using an open channel? I’m not convinced there is – the contradiction we need to resolve is that a completely public channel will inevitably cause us to filter what we write, whilst part of the power of the blogosphere is the opportunity to discuss ideas with people from very different contexts. As Lilia said:

I said to a couple of people on my first Skype round that I wish to be able to get many of us to work together at the same place, but I guess it’s not feasible :) And even if it would be I don’t think it would work well: the power of our joint discoveries comes from “weak-tied” nature of our connections, different backgrounds, different countries and different lives. Still, sometimes I wish to know easy ways to turn weak ties into strong ones, at least for the time needed to develop ideas that worth it.

I wonder if the more sophisticated Wiki tools would help here – the ones that allow sections to be made secure? Or some other way of easily forming a secure group that is (paradoxically) open and easy to use for those in that group?

Mapping the process of Knowledge Making

16-10-2003

A couple of weeks ago Spike Hall wrote about Mapping Knowledge-Making Efforts – inspired by Liz Lawley’s criticism of the short attention span of the blogosphere, he proposed a web-based tool to co-ordinate longer-term collective knowledge-making efforts.

In a comment to that earlier entry I expressed interest balanced by a concern that there are significant socio-cultural and emotional influences operating in blogging which urge us to a set of behaviours I would now summarise as read fast, skim the surface, post often. I suggested that we should look to the “Rules of Discourse” for the new tool that would be necessary to balance out those influences and create the behaviour Spike is seeking.

In a followup article Spike builds on that comment to ask:

What “rules of discourse” [standing for wired-in structure and processes, decision-making rules, etc.] will take care of such issues as:

  • a) attracting, educating, recognizing/rewarding, assigning and, for that matter, retiring players,
  • b) folding player knowledge products into a meta-knowledge corpus,
  • c) signaling depth and frequency of change to knowledge consumers, players and underwriters,
  • d) critically evaluating product as it is developed,

in spite of the presence of natural entropic counter-forces to the contrary?

In thinking about this I was reminded of Coase’s Penguin – Benkler’s paper on the application of lessons from the Open Source software movement to a generalised model of the Peer Production of knowledge. The two key principles Benkler identifies are the appropriation model (i.e. how participants extract economic value from their work) and the related issue of how the rights to the products of production are assigned.

Benkler identifies that peer production models are best suited to environments where the contributors are moving towards an indirect appropriation model – e.g. an increase in reputation from contributing to a body of knowledge, with that reputation leading to increased opportunities for making money (e.g. from consultancy). If the system is designed around this model then the intellectual property rules within the system have to prevent the situation where a sub-set of members claim ownership of the direct output, thus killing the production process.

So before we can answer Spike’s first question I think we have to ask about the motivating factors of our expected participants.

When we move on to consider recognition, signalling and evaluation, I think it would be fruitful to look at other community moderation schemes – for example Kuro5hin and Slashdot. Tom Coates has written a couple of recent articles on moderation and has just set up a site specifically “designed to find creative ways to manage online communities and user-generated content”, so I think a little mining of his ideas might be fruitful too…

Seven Survival Tips for Knowledge Managers

Dave Pollard offers Seven Survival Tips for Knowledge Managers

  1. Focus knowledge and learning systems on ‘know-who’, not ‘know-how’
  2. Introduce new social network enablement software and weblogs to capture the ‘know-who’.
  3. Keep only selected, highly-filtered knowledge in your central repositories.
  4. Don’t overlook the value of plain-old ‘data’.
  5. The bibliography may be more valuable than the document itself.
  6. Don’t wait for people to look for it, send it out, using ‘killer’ channels.
  7. Create an internal market for your offerings by giving valuable stuff away.

Gurteen Knowledge Conference and XKM

23-06-2003

Matt Mower has some good summary posts ( 1 2 3 4 5 ) from the Gurteen Knowledge conference.

Also on his site Matt has a fledgling wiki dedicated to the subject of eXtreme Knowledge Management (XKM), “a lightweight KM methodology”.

Towards Structured Blogging

Sébastien Paquet: Towards Structured Blogging.

And of course this post is an example of that which he describes – pinging as it does both KMPings and the Blog-Network Metablog.
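
For anyone curious what that ping amounts to mechanically, here is a minimal sketch using the conventional weblogUpdates.ping XML-RPC call; the endpoint URL below is a placeholder, not the actual KMPings or metablog address.

```python
# Sketch of notifying a topic exchange that a post has been published, via the
# conventional weblogUpdates.ping XML-RPC method. The endpoint URL is a placeholder.
import xmlrpc.client

def ping(endpoint, blog_name, blog_url):
    server = xmlrpc.client.ServerProxy(endpoint)
    return server.weblogUpdates.ping(blog_name, blog_url)

# Example (endpoint is hypothetical):
# ping("http://example.org/RPC2", "Synesthesia", "http://www.synesthesia.co.uk/blog/")
```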

Blogs as stories

In Blogs and Knowledge Sharing, Ton picks up the story of why we do this by considering blogs as story-telling – more specifically a way of telling the story of how the writer has discovered some knowledge complete with all the false leads and wrong turns.

Searching the blogsphere

08-03-2003

Micha Alpern says

Some times I want to know what the world thinks (google)
Some times I want to know what I think (my weblog)
Some times I want to know what those I respect think (blogs I read)….

… and backs it up with code…

[via The Shifted Librarian (http://www.theshiftedlibrarian.com/)]

More on Creative Commons, economics of IP etc

Notes to self to read and digest later…

Tim Hadley on the long-term effects of Creative Commons licences [via Ernie the Attorney]

Douglas Clement writes “Does innovation require intellectual property rights?” (http://www.reason.com/0303/fe.dc.creation.shtml), reviewing the paper “Perfectly Competitive Innovation” [PDF 253kb] (http://minneapolisfed.org/research/sr/sr303.pdf) by Michele Boldrin and David K. Levine [via TeledyN (http://www.teledyn.com/mt/)].

Personal Knowledge, Universal Knowledge

21-02-2003

Spike Hall is thinking about Knowledge-making and distinguishes between personal knowledge and universal knowledge:

Take learning to ride a bike, for example; [...] at the end the learner can do more [...] and has, therefore, ‘made knowledge’. But it’s not knowledge-making in the universal sense. The universal sense applies when our new knowledge is also [provably, arguably] new for EVERYBODY. Claims for knowledge-making in the universal sense are addressed in the academic and scientific literature.

I suggest that this universal knowledge is a facet of what General Semanticists would call time-binding, the ability of human beings to learn from prior generations. He continues:

The pursuit of a bit of universal knowledge won’t be unlike the pursuit of personal knowledge; the quest, before it ends, will, however, be more rigorous. This would be so first because establishing its newness in a universal sense has one communicating, explaining, demonstrating to a broad audience. Those that are skeptical about its newness or its utility in the environment to which it will apply (remember the equilibrium between individual and environment) will have to be satisfied, through reading descriptions of the inventor’s efforts of study and research, first, and her/his communication/explanation of the artifact itself, second, that it is both useful and new.

…and goes on to wonder how weblogging relates to this

In my next entry I will reconstruct a nonweblogged knowledge-making experience and then I will speculate how the same effort might have been different with weblog mediation.

I’m going to speculate that the way weblogging could help this is as follows:

  • Communication – the obvious benefit of any online medium: wide visibility. The ease of updating a blog helps currency; the use of “recently updated” sites and syndication feeds helps visibility; and the tendency of other bloggers to comment on things they have found increases visibility further.
  • Presenting antecedents and research – again, any online medium offers (through hyperlinks) potentially easy access to prior art. A frequently updated format such as a weblog, by effectively publishing the research journal, has the potential to map out the process clearly and show where the new thoughts have been introduced.
  • Open dialogue – through comments and trackbacks there is a visible record of discussion: challenge and response.

Is knowledge work improvable?

Filed under:

Tags:

07-02-2003

Jim McGee is asking Is Knowledge work improvable?. He contrasts the “organic” approach of “knowledge-enablers” espoused by Kim Sbarcea and others with Taylorism. Sbarcea attacks the Taylorist, “command and control” approach to KM, but McGee rebuts with “it is a mistake to confound the issue of what to call knowledge management with objections to Taylorism.”
McGee goes on to link in thoughts triggered by Peter Drucker’s 1999 article “Knowledge-Worker Productivity: The Biggest Challenge”, and identifies that knowledge work is a process replete with feedback loops.

In a similar head-space, excited utterances points to Metrics for CM and KM by James Robertson

My instinct is that all of these approaches are useful, especially if you can apply them synergistically. The key, I believe, is good stakeholder analysis around the relevant knowledge-work process, asking questions such as:

  • Who are the groups affected by this process?
  • What values do they associate with the work and its outcomes?
  • What do they need to see from “improvements” to convince them the change is worthwhile?

For example, whilst the management who are investing in systems, tools, training etc. will want to see metrics that show a hard ROI, the practitioners and their immediate “customers” might be more concerned about how it helps them solve problems, or whether their knowledge contributions and expertise are recognised…

Open Content and Just-in-Time Books

Filed under:

Tags:

28-01-2003

Gary Lawrence Murphy is writing about Open Content on Prentice Hall – the series that Prentice Hall are bringing out under the Open Publication License.

Gary refers to his earlier experiences trying (unsuccessfully) to persuade Macmillan to adopt an Open Content approach to a project he was driving. The issues weren’t just with the publishers – authors had problems with a licence that was neither 100% free nor standard, and the publication process fell over in the middle: authors and printers were ready for XML-based document manipulation and output, but the editors in between were still using “their MsWord-based font-painters”.

He ends with a vision for the future:

XML-based publishing where the manuscript does not really exist, where it’s a collection of sections and variations, indexed, threaded and exported for books [...] all of it online updated on the fly by the community who uses it. The art of the author/editor would be one of filtering, pulling what they need from the knowledge base to create other titles as well, semantically linking it all through topic-maps …
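The mechanics behind that vision can be caricatured in a few lines. A minimal sketch – the section data and topic names are invented purely for illustration, not any real publishing toolchain – in which the “manuscript” is just a filtered view over a pool of topic-tagged sections:

```python
# Illustrative only: a pool of topic-tagged sections from which
# different "titles" are assembled by filtering.
sections = [
    {"id": "s1", "topics": {"xml", "intro"},       "text": "Getting started with XML..."},
    {"id": "s2", "topics": {"xml", "publishing"},  "text": "Exporting books from XML..."},
    {"id": "s3", "topics": {"licensing", "intro"}, "text": "Open publication licences..."},
]

def assemble(topic_filter):
    """Pull every section whose topics overlap the filter, in pool order."""
    return [s for s in sections if s["topics"] & topic_filter]

# Two different "books" drawn from the same living pool of content.
beginners_guide     = assemble({"intro"})
publishing_handbook = assemble({"xml", "publishing"})
```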

Knowledge Sharing Environments

Filed under:

Tags:

23-01-2003

Lilia at Mathemagenic quotes George Siemens on the components needed for a Knowledge Sharing Environment, and links to Denham Grey’s wiki about knowledge sharing.

Conflicts of interest between publishers and information creators

Filed under:

Tags:

23-01-2003

In an earlier article I floated some ideas about applying the concepts of peer production to intellectual products in the fields of NLP and Neuro-Semantics. On a related but more general note, David Gammel links to Copyright Contradictions in Scholarly Publishing by John Willinsky. If his conclusions are correct then we should expect a large take-up of Creative Commons licences by academia…

Peer Production, the Creative Commons and NLP / NS

Filed under:

Tags: ,

18-01-2003

A long follow up to the previous article, linking ideas from Open Source and Creative Commons to development of the fields of NLP and Neuro-Semantics.


Buckminster Fuller article

Internet Time Blog has an interesting article on Buckminster Fuller, including links to his work online, based on a talk by Bonnie DeVarco. Fuller is one of those people “we’ve all heard of”, yet I’ve never read his work. A quick taste of the links, plus Bonnie’s reported summary:

Characterizations of the man
  • Leonardo da Vinci of the 20th Century
  • Poet of Industrialization
  • Engineer Saint
  • Anti-academician
  • I Seem to be a Verb
  • I am a random element.
  • I am a comprehensive anticipatory design scientist.

His thinking
  • Micro-incisive and macro-inclusive
  • Nothing is static; everything is dynamic
  • The importance of charting trends
  • Being comprehensive rather than general
  • The importance of thinking out loud
  • The importance of INTUITION
  • Dare to be naïve

Among his paradigms
  • Newtonian to Einsteinian universe
  • Wired to wireless
  • Ephemeralization of information
  • Accelerating acceleration
All of this has convinced me I need to add his work to my reading list…

Blogging network and the neuro-semantics of trust

Gary Lawrence Murphy picks up the thread about Bridges and Bubbles and asks some fundamental questions about how we should evaluate the value of each link on the graph:

The bridge itself may be an accident of happenstance and bandwidth, but to grow ourselves, we’re enticed (or compelled) to test each path for inter-networked recommender bridges out from our own local space [...] Seeking Matt’s glittering cave moments, we cross over those bridges we find, and some of us become (by accident or design) new bridges for others. What’s important, the effect we want, arises not from the number of bridge paths, but by their quality, and it’s a totally subjective quality, and therefore unpredictable. Far from the networking is everything approach of Thomas Power, [...] perhaps a more efficient strategy may be a second-order goal to cultivate relationships with connected (bridging) individuals to discover what bubbles they know but also to suss out our personal metrics of the qualities of their knowledge; as with sex, quality beats quantity

Gary goes on to link this to earlier comments he wrote about the role of trust (and that in itself is linked to a fascinating dialogue on trust that Gary has contributed to on Knowledge Board). The conversation stretches across several platforms; the interchange relevant here is between Gary and Ton Zijlstra.
To summarise, Gary’s key point is

that ‘trust’ arises from a brainstate, an emotional sensation

whereas Ton says

So if we say we trust someone, this means that we recognize a consistent pattern of behaviour, and a certain level of predictability (reputation) in the other.

Gary notes (and Ton acknowledges) that most of the participants in the Knowledge Board discussion appeared to shy away from this “animal effect” to look for “higher” reasons for trust, and goes on to suggest

The more correct response is, IMHO, that while our brain colours our perceptions, humans are so blazingly successful on this planet because we can (not that we do, just that we can) transcend our physiology (when it’s appropriate!) to reach for higher conclusions.

The thing I notice about this discussion is its Cartesian brain-vs-physiology dualism. IMHO, looking through a systemic neuro-semantic frame will allow us to combine both insights, perhaps leading to more clarity…

Like all systems with feedback loops, it’s easy to get caught in chicken-and-egg thinking if you ask which comes first – the somatic response or the meta-state thought structure about the value of a consistent pattern of perceived behaviour. It’s a truism in neuro-semantics that meta-states collapse very quickly into a neuro-physiological state, so unpicking this to explore (and maybe change) the higher-level states is an important step towards understanding what is happening.

Ton appears to have done that unpicking, and for him the feeling of trust is associated with the cognitive state of recognising consistent behaviour. Ton doesn’t mention whether he actually makes his trust-based decisions on a gut feeling or whether he consciously explores the history of consistent behaviour. My guess in the absence of data is the former (but open to correction!)…

Some questions come to mind:

  • Do other people share Ton’s criteria for trust?
  • What other criteria might apply?
  • What evidence can we glean from online connections that might allow those criteria to be applied?
  • Could we create new forms of information that would help that discrimination?
  • How do we Mind-to-Muscle those mental states to give an emotional signal for “online trust” that will work as a shorthand?

Lots more to do, and I’m sure others out there are further down the path. In the meantime perhaps we are, as Gary says, “back to clicking on pure blind faith”!

More on the evolving network between blogs

Filed under:

Tags:

08-01-2003

Euan at The Obvious sums up his view of the “blogroll or not blogroll” debate as “I just like following winding paths”.

At the same time Ton Zijlstra picks up on various experiments with Social Network Analysis of the blogosphere and draws the important distinction between a map based on “who knows you” – i.e. an analysis of inbound links (and I would say by implication trackbacks and comments) and a map of “who you know” based on outbound links.

This feels like the time to send a request to the Lazyweb – a graphical web-based tool that takes a URL and presents some kind of graphical depiction of the incoming and outgoing networks it discovers…
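The outbound (“who you know”) half of that request is easy to prototype. A minimal sketch in Python using only the standard library – the weblog URL is a placeholder, and the inbound (“who knows you”) half, which would need something like a search engine’s link-lookup service, is deliberately left out:

```python
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkCollector(HTMLParser):
    """Collect the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def outbound_sites(url):
    """Return the set of external sites a page links out to."""
    html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
    parser = LinkCollector()
    parser.feed(html)
    here = urlparse(url).netloc
    return {urlparse(urljoin(url, link)).netloc for link in parser.links} - {here, ""}

# "Who you know": every site this weblog points out to.
for site in sorted(outbound_sites("http://my-weblog.example.com/")):
    print(site)
```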

Updated Matt Jones writes Bridging The Bubbles about similar ideas applied to finding the bridging points between clusters of particular political (or other) views… amongst the sites he references is Valdis Krebs’ analysis of book links on Amazon, “Divided we stand? Political patterns on the WWW”.

Updated again Just found Reputation and Conversation in Blogging and Network Topology at TIG’s Corner [via Doc Searls]

Update 3 GoogleBrowser looks like it might be part of the way there in terms of the display…. [via Ross Mayfield]

Personal Knowledge Publishing

Filed under:

Tags:

19-12-2002

Two-part article (part 1, part 2) on “Personal knowledge publishing and its uses in research” by Sébastien Paquet [via Mathemagenic]

Update on linking blogs by category

Filed under:

Tags:

15-12-2002

Prompted by Ben Hammersley’s idea, Ben Trott has written the More Like This From Others script and Ben has implemented it… definitely something I shall look at for this site, but only after some other “behind-the-scenes” stuff I’m into at present… (a major site redesign and technology change)

Categorising Blogs

Filed under:

Tags:

13-12-2002

Ben Hammersley and Azeem Azhar are debating how to create a decentralised categorisation service for blogs, to support a “More Like this” sort of thing…
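Whatever the decentralised plumbing ends up looking like, the matching step itself can be very simple. A minimal sketch, assuming each participating blog exposes its posts’ categories in some agreed format – the data and the overlap scoring here are my own illustration, not Ben or Azeem’s actual proposal:

```python
# Hypothetical per-post category data gathered from several blogs.
posts = {
    "http://blog-a.example.com/km-and-blogs": {"knowledge-management", "weblogs"},
    "http://blog-b.example.com/trust-online": {"trust", "weblogs"},
    "http://blog-c.example.com/xfml-facets":  {"metadata", "categorisation"},
}

def more_like_this(categories, posts, limit=5):
    """Rank other posts by how many categories they share with this one."""
    scored = [(len(categories & cats), url) for url, cats in posts.items()]
    return [url for score, url in sorted(scored, reverse=True) if score > 0][:limit]

# A post tagged "weblogs" and "trust" pulls in the most-overlapping neighbours first.
print(more_like_this({"weblogs", "trust"}, posts))
```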

XFML

XFML map of this site:
  • I’ve added an XFML feed to the syndication outputs of the site.
  • You can see a FacetMap view of this site here.
  • MT template from Ease.
  • Original pointer to XFML from Ben Hammersley.

Links on FOAF and RDF

Been reading up on FOAF and RDF generally – here are some links for my own reference.

Weblog metadata

Filed under:

Tags:

15-11-2002

The Weblog MetaData Initiative: Next Step: HTML [meta] Experiment. N.Z. Bear says:

We’ve had a great deal of useful and productive discussion in the forum, but it seems that some practical experimentation would be of use to us as well at this point. We seem to have reached a rough consensus on what data we want to track — and are getting bogged down in the many, many possible approaches of how to track it.
At Dean’s suggestion, I’ve gone ahead and taken a rather quick-and-dirty approach to our encoding problem. I’ve developed a specification which shows how to encode our general schema’s metadata using only HTML [meta] tags. Along the way, I’ve also “Dublin Core-ized” our data schema, and tried to use DC tags wherever possible and appropriate.
What we’d like to do is get several (as many as possible) volunteers to apply this specification to their own weblogs, thereby beginning to actually ‘publish’ real metadata. At the same time, we call on everyone who is codingly-inclined to begin examining approaches for grabbing, parsing, slicing, dicing and presenting back this very same metadata.
In other words, enough talk. Let’s do some hacking.
Please note that this does not imply an end to our discussions on appropriate encoding formats for the final specification. Depending on what we learn through this experiment, we may decide tags are one of many encodings we support, or perhaps decide they are unworkable entirely. But I think we’ll learn a great deal by the attempt. (And to be frank, I personally believe we’ll be best served by supporting many encoding formats, and only strictly dictating the actual data to be tracked itself. But that’s a discussion for another day).

[via David Gammel]
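For the “grabbing and parsing” side N.Z. Bear calls for, here is a minimal sketch of reading those [meta] tags back out of a weblog page. The URL is a placeholder and the Dublin Core field names at the end are illustrative guesses; the real field list should come from the WMDI schema itself:

```python
import urllib.request
from html.parser import HTMLParser

class MetaCollector(HTMLParser):
    """Collect name/content pairs from <meta> tags."""
    def __init__(self):
        super().__init__()
        self.meta = {}
    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)
            if "name" in d and "content" in d:
                self.meta[d["name"].lower()] = d["content"]

def weblog_metadata(url):
    """Fetch a weblog page and return its <meta> tag data, Dublin Core or otherwise."""
    html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
    parser = MetaCollector()
    parser.feed(html)
    return parser.meta

meta = weblog_metadata("http://some-participating-weblog.example.com/")
print(meta.get("dc.creator"), meta.get("dc.subject"))
```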

K-Logging Pilot

Filed under:

Tags:

12-11-2002

Rick Klau recaps his experiences of introducing knowledge-logging into his company, with a pilot group of 12 users (out of 125 people in the company).
[via a klog apart]

KM and learning

Filed under:

Tags:

11-11-2002

Just found Lilia Efimova’s site Mathemagenic. Interesting selection of articles on Knowledge Management and Learning. Here are a few:
Citing Styles, Baking knowledge into the work processes of high-end professionals, Why Blogging 2, Blog as a learning tool, Corporate objectives and learner-centered learning.

And one that struck a chord with me: It takes courage to blog
