Losing Information at CERN
CERN is a wonderful organisation. It involves several thousand people, many of them very creative, all working toward common goals. Although they are nominally organised into a hierarchical management structure, this does not constrain the way people communicate and share information, equipment and software across groups.
The actual observed working structure of the organisation is a multiply connected “web” whose interconnections evolve with time. In this environment, a new person arriving, or someone taking on a new task, is normally given a few hints as to who would be useful people to talk to. Information about what facilities exist and how to find out about them travels in the corridor gossip and occasional newsletters, and the details about what is required to be done spread in a similar way. All things considered, the result is remarkably successful, despite occasional misunderstandings and duplicated effort.
A problem, however, is the high turnover of people. When two years is a typical length of stay, information is constantly being lost. Introducing new people demands a fair amount of their time, and of others', before they have any idea of what goes on. The technical details of past projects are sometimes lost forever, or only recovered after a detective investigation in an emergency. Often the information has been recorded but simply cannot be found.
If a CERN experiment were a static once-only development, all the information could be written in a big book. As it is, CERN is constantly changing as new ideas are produced, as new technology becomes available, and in order to get around unforeseen technical problems. When a change is necessary, it normally affects only a small part of the organisation. A local reason arises for changing a part of the experiment or detector. At this point, one has to dig around to find out what other parts and people will be affected. Keeping a book up to date becomes impractical, and the structure of the book needs to be constantly revised.
The sort of information we are discussing answers questions like:
- Where is this module used?
- Who wrote this code? Where does he work?
- What documents exist about that concept?
- Which laboratories are included in that project?
- Which systems depend on this device?
- What documents refer to this one?
The problems of information loss may be particularly acute at CERN, but in this case (as in certain others), CERN is a model in miniature of the rest of the world in a few years' time. CERN now meets some problems which the rest of the world will have to face soon. In ten years there may be many commercial solutions to the problems above, while today we need something to allow us to continue.
Linked information systems
In providing a system for manipulating this sort of information, the hope would be to allow a pool of information to develop which could grow and evolve with the organisation and the projects it describes. For this to be possible, the method of storage must not place its own restraints on the information. This is why a “web” of notes with links (like references) between them is far more useful than a fixed hierarchical system. When describing a complex system, many people resort to diagrams with circles and arrows. Circles and arrows leave one free to describe the interrelationships between things in a way that tables, for example, do not. The system we need is like a diagram of circles and arrows, where circles and arrows can stand for anything.
We can call the circles nodes, and the arrows links. Suppose each node is like a small note, summary article, or comment. I'm not overly concerned here with whether it has text or graphics or both. Ideally, it represents or describes one particular person or object. Examples of nodes can be:
- Software modules
- Groups of people
- Types of hardware
- Specific hardware objects
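The node-and-link model above maps naturally onto a small graph data structure, and the example questions earlier (who wrote this code? which systems depend on this device?) become simple link traversals. A minimal sketch in Python; the node kinds, relation names, and example data here are invented for illustration, not taken from any actual CERN system:

```python
# A tiny "web" of typed nodes connected by typed links, following the
# circles-and-arrows model described above. All names are invented examples.

class Node:
    def __init__(self, kind, name):
        self.kind = kind        # e.g. "person", "module", "document"
        self.name = name
        self.links = []         # outgoing (relation, target) pairs

    def link(self, relation, target):
        self.links.append((relation, target))

def follow(node, relation):
    """Answer questions like 'what did this person write?' by
    following links of one relation type from a node."""
    return [target.name for rel, target in node.links if rel == relation]

# Build a small web: a person, a software module, a document.
tim = Node("person", "Tim")
mod = Node("module", "tracker-readout")
doc = Node("document", "Tracker Readout Notes")

tim.link("wrote", mod)
mod.link("described-by", doc)

print(follow(tim, "wrote"))           # -> ['tracker-readout']
print(follow(mod, "described-by"))    # -> ['Tracker Readout Notes']
```

Because links can carry any relation and nodes can stand for anything, the structure is not constrained to a fixed hierarchy: adding a new kind of node or relation requires no change to the storage scheme.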
| Web 1.0 | Web 2.0 |
|---|---|
| evite | upcoming.org and EVDB |
| domain name speculation | search engine optimization |
| page views | cost per click |
| screen scraping | web services |
| content management systems | wikis |
| directories (taxonomy) | tagging ("folksonomy") |
1. The Web As Platform
2. Harnessing Collective Intelligence
3. Data is the Next Intel Inside
4. End of the Software Release Cycle
5. Lightweight Programming Models
6. Software Above the Level of a Single Device
7. Rich User Experiences
Core Competencies of Web 2.0 Companies
In exploring the seven principles above, we’ve highlighted some of the principal features of Web 2.0. Each of the examples we’ve explored demonstrates one or more of those key principles, but may miss others. Let’s close, therefore, by summarizing what we believe to be the core competencies of Web 2.0 companies:
- Services, not packaged software, with cost-effective scalability
- Control over unique, hard-to-recreate data sources that get richer as more people use them
- Trusting users as co-developers
- Harnessing collective intelligence
- Leveraging the long tail through customer self-service
- Software above the level of a single device
- Lightweight user interfaces, development models, AND business models
The next time a company claims that it’s “Web 2.0,” test their features against the list above. The more points they score, the more they are worthy of the name. Remember, though, that excellence in one area may be more telling than some small steps in all seven.
McAfee: Enterprise 2.0: The Dawn of Emergent Collaboration
Enterprise 2.0 Technologies: Blank SLATES
Is there a kind of unresolved conflict in the article: Cunningham vs. Lennard vs. McAfee?
On page 6 it says:
“Enterprise 2.0 Ground Rules:
As technologists build Enterprise 2.0 technologies that incorporate the SLATES components, they seem to be following two intelligent ground rules. First, they’re making sure their offerings are easy to use. With current tools, authoring, linking and tagging all can be done with nothing more than a Web browser, a few clicks and some typing. No HTML skills are required. It seems reasonable to assume that anyone who can compose e-mail and search the Web can use all of the technologies described in this article with little or no training.
Second, the technologists of Enterprise 2.0 are trying hard not to impose on users any preconceived notions about how work should proceed or how output should be categorized or structured. Instead, they’re building tools that let these aspects of knowledge work emerge.
This is a profound shift. Most current platforms, such as knowledge management systems, information portals, intranets and workflow applications, are highly structured from the start, and users have little opportunity to influence this structure. Wiki inventor Cunningham highlights an important shortcoming of this approach: “For questions like ‘What’s going on in the project?’ we could design a database. But whatever fields we put in the database would turn out to be what’s not important about what’s going on in the project. What’s important about the project is the stuff that you don’t anticipate.”[10]”
On page 8 it says:
“One of the most surprising aspects of Enterprise 2.0 technologies is that even though they’re almost completely amorphous and egalitarian, they appear to spread most quickly when there’s some initial structure and hierarchy. “Information anarchy is just that,” says Lennard. “You have to give people a starting point that they can react to and modify; you can’t just give them a blank workspace and say, ‘Use this now.’ I’m confident that we’ll hit a ‘tipping point’ after which tool use will grow on its own, but we’re not quite there yet.” Blogging at DrKW, for example, has increased gradually but steadily (see “Growth of Blogging Inside DrKW,” p. 28.)”
Patterns Where 2.0 Should Replace 1.0
Patterns Where 2.0 is an Alternative to 1.0
There are also numerous accounts of how organisational culture dictates the use of technology, and these accounts seem to have stronger support than those described above. We have defined organisational culture as a pattern of basic assumptions that has worked well enough to be considered valid and, therefore, to be taught to new members as the correct way to think, feel and act. This is similar to the inertia Orlikowski (2000) describes when she notes how organisations tend to use new technology to reproduce and reinforce existing organisational behaviour. Ciborra and Lanzara (1994) use the notion of formative context to denote the set of institutional arrangements and cognitive imageries that the actor brings and routinely enacts in work. The formative context is thus a pervasive and deep-seated texture of relations that influences the organisational member’s execution of routines and constitutes a background for all her actions, although she typically remains unaware of its presence. Organisational culture can therefore be seen as one component of formative context, and as such, it affects technology utilisation.
Until recently, computers and information systems were only to be found in corporate or academic settings, and access to such devices was restricted to working situations. This has changed radically. Our society has transformed into an e-society where information technology is becoming increasingly ubiquitous and where interaction with computational things is not limited to working hours but is part of everyday life. It seems plausible that this transformation, and the growing use of computing devices for recreation and play, have had (and will continue to have) an impact on how IT is used in work settings.
The intranet should not be understood as a homogeneous phenomenon. The open-endedness of web technology allows every organisation to shape its intranet according to its particular needs and preferences, and this means that intranet implementations may span from tightly controlled and structured information systems to loosely coupled and almost chaotic environments (Bansler et al., 2000). Nevertheless, there is still a significant discrepancy between intranets and the public web, and in this paper I have tried to highlight the cultural differences that I have observed between the two. It is quite evident that even though intranets may empower people to make things happen rather than to have things happen to them, web technology does not accomplish this in and of itself (Slevin, 2000). This becomes obvious when we look at how today’s intranets have been implemented and managed, and note the discrepancy between theory and practice. Many intranets are little more than electronic bulletin boards, actual use is sporadic at best, and the technology is used primarily to share static documents (cf. Lai & Mahapatra, 1998; Newell et al., 1999; Stenmark, 2003a; b). It is thus not surprising to see headlines such as “Why do intranets fail?” (Duffy, 2001).
Many information managers and corporate internal web designers advocate the use of consistent navigation and a familiar look and feel across their intranets. This, however, is not an uncomplicated matter. Studies in human-computer interaction have shown that users benefit from the higher predictability that follows from a consistent naming convention, but this presupposes that the terminology is well understood by the users. Unfortunately, this is often not the case on intranets. Further, consistency makes sense within a site but not necessarily across sites. Not many users expect a political news site to have the same menu items as a recreational sports site. Yet most companies have design templates and taxonomies that are supposed to be used across the entire intranet. This may not be the best solution; in fact, such an approach may leave users feeling alienated and lost.
The arguments outlined in this paper also have implications for practice. I have argued that there is a clash between the information management culture that exists in today’s modern industry organisations and the principles underpinning web technology. It is unrealistic to think that organisations should be willing or able to replace their mindset overnight. However, being aware of the cultural differences that do exist may help organisations understand some of the issues they face when managing intranets. Management should reflect upon their motives for implementing intranets and more clearly communicate what they hope to achieve. If the main purpose is to establish a new channel for top-down information dissemination, this may be a straightforward process, well within the reigning information management paradigm. However, if user commitment, cross-departmental communication, and active knowledge sharing are sought, a more laborious road lies ahead. Hopefully, this paper can be a guide along that road.