Curbing student plagiarism doesn’t have to involve purchasing expensive detection software from companies like iParadigms that offer to flag academically suspicious writing. A simple curriculum design precludes the entire problem without spending money. Divide the writing assignment into small sequences, having students write more frequently, in shorter bursts. Pair students together and have each respond to the other’s latest writing. Evaluation of the ensuing dialogue now focuses on students’ ability to grasp and analyze concepts in an intellectual environment that is dynamic and dialogic. Not only is this a pedagogically interesting exercise in itself, since it efficiently exercises critical thinking skills, but think about how difficult it would be for a student to plagiarize or outsource this type of exercise to an online paper mill. The millers don’t have time to fabricate verisimilitude in writing assignments that are contingent upon what another mind will be saying every other day.
Plagiarism and low-integrity writing thrive in a student-controlled soliloquy. With zero human interaction as the typical writing assignment unfolds, the plagiarist is free to concoct a hermetically sealed system of claims and counterclaims, lending the document an air of intellectual rigor. In this game, students can encrypt their plagiarism strategies with varying degrees of success, the most sophisticated attempts requiring the most sophisticated software to detect.
Yet, personally, I wouldn’t spend time blaming or punishing plagiarists when there is simply so much to reform in undergraduate programs that exploit the writing assignment as a way to contain the frightful demand that huge numbers of students place on a limited group of university instructors. That’s right: writing assignments don’t cause the instructor more work; they create the least amount of work possible because they reduce the role of instruction to evaluating final products (instead of intermittent coaching plus evaluation).
So often, and sadly, academia responds by embracing an American Idol model for evaluating student performance. In it, the student is asked to present intellectual work that was rehearsed in isolation, away from the evaluator’s eyes. In American Idol, it is only after the performance has been rehearsed to its fullest potential that the judges observe the final product, voice their ephemeral feedback and issue their stamp of judgment. It is the same in academia and, for lack of a better term, I call it the “product-over-process model” of academic correspondence. This model of academic evaluation conveniently eliminates the valuable but costly moments that could be exploited between a student and a teacher as the writing assignment undergoes its formation. It is also a system that begs to be gamed. If teachers are going to continue asking only for “final” and “processed” writing, then they risk being fed anything and everything. As a consequence, the liabilities and software detection bills keep piling up. Let’s face it: undergraduate academia, with its emphasis on evaluating intellectual products over processes, is guilty of embodying and propagating a system that values singular impressions instead of Socratic experiences, and as a consequence it deserves whatever mess that puts it in.
Luckily, an undergraduate program that shrewdly exploits the peer-to-peer writing exercise proposed above can continue to preserve its evaluator resources in the face of large student populations and simultaneously maintain a pedagogy that upholds sustainable thinking.
Friday, May 8, 2009
New idea for a wiki? Object-driven history project
I trapped an idea this morning and it makes me wonder if something like it might exist somewhere in the world of collaborative knowledge production:
A written history of the world that is driven by an online, collaboratively assembled catalogue of the historical objects and sources that have formed the histories we have read. It’s the idea that if every historical claim can be traced back to artefact evidence, then maybe a new historical project can begin to rewrite a history that first catalogues all the historical objects housed in public and private collections, then uses them to flesh out the narrative afterwards. I’m imagining this done on a wiki, where people can simply try to obtain as much available digital photographic evidence of vases, scrolls, hand-written accounts, whatever, and then organize it into a master chronology within the wiki space. There can even be geopositional links that point readers to where these objects may be located (in addition to offering information on how to access them, who has studied them, etc.). These images could also have trackback links to the written accounts that have relied on them to fuel their historical narratives. Text in the body associates itself directly and immediately with the sources, which form the outline of the project. Text is principally used to describe how these sources have been used by historians. In later versions of this project, master historical narratives could be added as a way to lend “surfability” for student audiences.
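To make the shape of such a catalogue concrete, here is a minimal sketch of what a single entry might look like as a data structure. Everything in it is a hypothetical assumption for illustration (the class, the field names, the chronology helper), not an existing wiki schema:

    from dataclasses import dataclass, field
    from typing import List, Optional, Tuple

    @dataclass
    class HistoricalObject:
        # All field names are illustrative assumptions, not a real schema.
        title: str                                            # e.g. "Attic amphora, ca. 530 BCE"
        image_urls: List[str] = field(default_factory=list)   # digital photographs
        date_range: Optional[Tuple[int, int]] = None          # (earliest, latest) year; negative = BCE
        geoposition: Optional[Tuple[float, float]] = None     # (lat, lon) of current location
        holding_collection: str = ""                          # who houses it, how to access it
        studied_by: List[str] = field(default_factory=list)   # scholars who have worked on it
        cited_in: List[str] = field(default_factory=list)     # trackbacks to written histories

    def master_chronology(objects: List[HistoricalObject]) -> List[HistoricalObject]:
        """Arrange the dated entries into the wiki's master chronology."""
        return sorted((o for o in objects if o.date_range), key=lambda o: o.date_range[0])

The point of the sketch is the inversion it encodes: the narrative text would hang off these entries through the cited_in trackbacks, rather than the entries hanging off the narrative.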
I credit the inspiration for this idea, by the way, to an excellent grad-level methods course I took with Sandra Braman in 2006, who had me read Hayden White’s “Tropics of Discourse”.
Saturday, January 31, 2009
Virtual coexistence using a wiki-enhanced Second Life platform
Prior to the merging of wiki-based collaboration and VR simulators, the idea that participants could design information-rich virtual worlds seemed unthinkable. Second Life has begun supporting simulators that let participants create, share and collaboratively edit documents that play a critical role within a simulated world. For example, a virtual nation or community within Second Life may now attempt to write a constitution that exists as a readable, shareable object in that virtual space. More specifically, a group of founders may co-create its constitutional framework, adding significant intellectual depth to any interaction within a socio-political experiment in Second Life.
In this application, participants are Israeli and Palestinian youth, custodians of a jointly-owned land that is cursed with unevenly distributed natural resources. The basic premise of the exercise is to force participants on either side to create an information-rich coexistence city that contends with persistent complications introduced through the simulation exercise.
http://virtualcoexistence.ning.com/
Tuesday, January 27, 2009
A Net Reduction or a Net Expansion: on the outcome for representations of the Israeli-Palestinian conflict in Wikipedia
1.1 Introduction
We know that change in the multiplicity of knowledges is inevitable as the boundaries of epistemic production become porous, enabling the interaction of culturally bound conceptual systems. This study is concerned with the extensional meaning of knowledge representations that interact in a textual sphere accommodating multiple discursive systems. The Israeli-Palestinian conflict is amenable to this analysis since the concepts involved in its representation tend to resist a universal format that everyone can agree on. Yet Wikipedia lets very little get in the way of its need to forge a single picture of reality, no matter how improbable the chances. Currently, the disparate narratives, images and stories of the Israeli-Palestinian conflict on Wikipedia have been forced to meld or, at the very least, be "stitched together incoherently like a Frankenstein monster". To date, no study has come close to examining the outcome of politically volatile or conflicted knowledges that are forced into coherence as a universal representation.
One key discussion in particular keeps this paper grounded in the theories of contemporary knowledge production (hereafter "KP") online: a discussion of the residual effects exhibited in universalized knowledge after it is dissected, played with and restructured through the special type of dialogic production process explicated in the theoretical section. This study was motivated by my desire to discover the new and singular effects imposed on knowledge artefacts by collaboration technologies that facilitate unprecedented types of participant configurations. By focusing on Wikipedia, this study will interrogate what I identify as dual forces in the future direction of knowledge. On the one hand, I note the transcendentalist vision, which strives for a knowledge that can be produced and consumed in a universally consistent manner. The opposite kind of thinking encourages the factoring in of bottom-up perspectives and culturally contingent knowledge categories that push to extend, color and texture the character of reality's representations.
This study focuses on Wikipedia in the hope of inspiring a series of investigations into other technologies (the semantic web, Flickr, the blogosphere, tagging, social bookmarking, Google) where an exciting and novel form of dialogic interaction plays a critical role in the production of knowledge. Specifically, I am concerned with a theoretical polarity of knowledge production: namely, epistemological actions that usurp, reduce and constrain the meaning of concepts (dialectical epistemology) versus those that loosen, extend and deepen it (exclectical epistemology).
A separate series could focus on other qualitative attributes of knowledge change in Wikipedia that are not confined to spatial metaphors. This study, however, shall focus on the ability of knowledge to expand or shrink in conceptual capacity as a consequence of the interactive spaces relied upon at Wikipedia. In doing so, I will examine the spaces of knowledge production that deal in mutually exclusive decisions, e.g. the politically volatile. The focus is on one article that tackles highly controversial representational constructs, a Wikipedia article titled "Zionist Terrorism". This provides a way to determine whether humans are successfully expanding the conceptual boundaries of a type of knowledge that inhabits antagonistic, winner-takes-all settings.
First, an explication of the main concepts:
1.2 The conceptual capacity of knowledge
The conceptual capacity of knowledge refers to the spaces hosting particular concepts. Capacity, in this sense, refers to the accommodation of more than one signification system operating within a particular textual space. In a common clash of signifying systems, two parties attempt to depict the same Israeli-Palestinian conflict, but each does so in a way that supports different, larger ideological propositions about that reality. A common example of this is the pro-Palestinian "human rights" discourse that competes with the pro-Israeli "national security" discourse for explicative power. In narrating what may have happened during an event, both may be referring to the same thing, but the "facts" comprising the event will be selected and handled differently, in ways that suggest larger representational realities.
As far as individual concept-words go, the more semantically determined a particular word is, the lower the chances that it will be interpreted in different ways. Of course, this has everything to do with concepts and their relationship to a larger system of meaning. In spaces where it is difficult for just one coherent meaning system to semantically determine its constituent concept-parts, one could say that the space has a higher conceptual capacity. Since the immediate and extensional meaning of a concept is ultimately determined by its relationship to the other concepts cohabiting the same text, it is possible to disrupt a network of meaning-relations in a space that invites ideologically diverse participants to edit a text as they please. Extensional meaning is another way of saying the external shape and contour that comprises the scope of a definition or perspective; it demarcates what a meaning is and is not. As is the case with politically volatile knowledge, many concepts "float" around and do not have an obvious home or anchoring within one particular discursive system.[1]
Words or concepts with immediately contested definitions will be used in production spaces of both high and low conceptual capacity. A single-author polemic, for example, will handle semantically volatile concepts like "terrorism" in ways that do not allow their usage in that text to "run against the grain" of the author's main thesis in any way. The conceptual capacity of a mono-authored book is low, since books, even digital ones, do not yet offer "edit buttons" that allow anyone to disrupt a linear sequence of text with contra-indicative content. In Wikipedia, the fact that a "discussion" page is built into every space for the collaborative co-construction of a knowledge article means that there is already a mechanism in place that anticipates the handling of polysemic terms. Its relatively easy access for many diverse participants to make additions, omissions and modifications to a text means that there will be multiple agendas wrestling for dominance within the space. Wikipedia's spaces of production, in this sense, have the capacity needed to examine and manage a plurality of worldviews (worldviews that, in turn, spawn discourses and other types of "sealed" symbolic systems).
Speaking beyond Wikipedia, it is understood that web 2.0 technologies are in the business of embedding knowledge objects with complex categories and therefore portend a liberating effect on semantic straitjackets. Much less discussed, however, is the possibility for a technology to suppress minority perspectives even when an online epistemic space is designed to achieve semantic-expansive effects. The same technology, in its attempt to open up the possibilities, can end up propagating and fossilizing a semantic hegemony.
1.3 Semantic-expansive knowledge
Expansion, in this context, refers not only to the quantitative proliferation of data but to the qualitative complexity and meaning-possibilities of knowledge. An expansion embraces a number of unique sets of semantic or conceptual criteria promoted by multiple, often competing, knowledge communities. Expansion, within the purview of knowledge, however, has its limits: with too much information, knowledge ceases to be useful, or even meaningful. "Semantic-expansive knowledge" could be viewed as an oxymoron, in that knowledge is an aggressive attempt at filtering information for the purpose of distilling a coherent product.
It is the ideological harmony of a group acting in concert that makes possible such wide divergences across epistemic communities that all try to depict the same phenomenon. Under conditions of forced dialogue leading to the semantic expansion of knowledge, a crisis of coherence can be unleashed in the text, threatening to unravel or fragment the unity of a singular meaning (Bakhtin speaks of the "centrifugal forces" of language). Texts produced under monologic conditions are said to be held together by a centripetal force, a Bakhtinian term denoting how all signifiers in the text are pushed towards a central point by an underpinning ideology. Semantic expansion does not necessarily imply an unraveling of textual coherence, however, since it is possible and commonplace for an individual or a group acting in concert to expand on its own knowledge.
When antagonistic communities co-construct knowledge in a way that consistently deepens and extends the semantic possibilities, a univocal "signal" ceases to be so readily discernible. What some might consider "cacophony," Mikhail Bakhtin called "polyphony": the multiplicity of different voices emanating from a single text. The text, rather than functioning as a simple, one-way conduit of ideological thinking to its audience, has now turned inward on itself. It has become a space for the enactment of inter-group discursive battles in the struggle for a coherent and one-sided meaning-outcome.
Case in point: this thesis will investigate how an encyclopedic article wishing to catalogue and describe all acts of "terrorism" perpetrated by so-called "Zionist" or "Israeli" elements generated intense friction after the attempt was made to expand the definition of Israeli acts of terrorism to include the 1982 massacre at the Sabra and Shatila refugee camps in Lebanon.[2] Had the Wikipedia article been written only by "user: Guy Montag" and other users advocating positive images of Israel, there would have been no impetus to associate Israel with this historical event, even tangentially. In subsequent discussions, they would maintain that there was not sufficient evidence directly implicating the Israeli government in the massacre and that, therefore, even mentioning it would be biased. In their view, there was never supposed to be an article carved out to isolate any phenomenon whose title would contain both "Israel" and "acts of violence".
The article originally titled "Israeli Terrorism" was, in fact, initiated by "user: Mustafaa", who was distressed by what he viewed as a disproportionate cataloguing of Palestinian acts of "terrorism". The Israeli Terrorism article would, at least in his mind, hold Wikipedia accountable for a consistent and thorough application of the "terrorism" concept in all its potential instantiations. After all, it had been pointed out that many potential systematic irregularities reflect Wikipedia's predominantly Anglo-Western user base. This would belong, in Mustafaa's mind, to another effort at regulating bias by expanding definitions to accommodate a plurality of meanings; in this case, a definition that did not limit the "terrorism" nomenclature to non-state actors of political violence.[3]
Wikipedia keeps a careful log of all the discussions used to make editorial decisions about an encyclopedia article. Below is an excerpted transcript of the debate over expanding the Israeli Terrorism article to incorporate the Sabra and Shatila massacre:
19:28, 14 Jun 2005 (UTC)
However, I'm not fine with the removal of Sabra and Shatila, a longstanding part of this article with support from three of four people who have voiced opinions on this page. Let's restore the status quo ante - Mustafaa
23:03 14 Jun 2005 (UTC)
Absolutely not. You have not proven that this was a direct Israeli action. - Guy Montag
23:11, 14 Jun 2005 (UTC)
I have not argued that this was a "direct" Israeli action, nor has anyone else. You have not explained why its being "direct" is relevant. - Mustafaa
23:44, 14 Jun 2005 (UTC)
Because you don't list actions of Lebanese Phalangists under something entitled Israeli terrorism. You can't hold Israel responsible for the acts of it's allies. It's just as simple as that. Everyone is responsible for their own actions. Note that the article already explains fairly well why this is generally considered and indirect Israeli action: so why are you including it? The simple question is this. Did the IDF go into the camps and do the killing? The answer is no. So it doesn't belong,. It belongs in the Lebanese Civil War article, not here. - Guy Montag
It is at this point that Mustafaa imports a quote from a supporting source:
23:16, 14 Jun 2005 (UTC)
"The Israelis surrounded the camps and sent the Phalangists into the camps to clear out PLO fighters, and provided the Phalangists with support including flares, food, and ammunition. An Israeli investigation found a number of officials (including the Defense Minister of that time, Ariel Sharon) "indirectly responsible" for not preventing the killings..." - Mustafaa
23:46, 14 Jun 2005 (UTC)
Alright, as long as it is mentioned that direct responsibility lies with the Phalangists, I can live with it. - Guy Montag
Within Guy Montag's sentiment, "I can live with it," lies the crux of globalized epistemology. In Wikipedia, knowledge expansion seems to be powered by the very heat and friction that arise from this kind of dialogic interaction, where one side must learn to "live with" intellectual inputs that are foreign to the group. It is obvious that Mustafaa was "writing against the grain" of any understanding Guy Montag would rely upon.[4] To Guy Montag, the inclusion of the Sabra and Shatila anecdote does not advance the cause of what he imagines a neutral article on "Israeli Terrorism" to be (an article whose very existence he believes is unjustified). Nevertheless, Guy Montag's adversarial function plays just as important a role as Mustafaa's (who set the agenda for expansion this time) in the resulting addition of the Sabra and Shatila segment. The friction fomented along the Guy Montag-Mustafaa axis, in fact, has been responsible for the co-construction of various representations of the Israeli-Palestinian conflict, each iteration an expansive or reductive version of itself (depending on how the community chose to make all the pieces fit together).
What to make of these growth phases? In science, it is considered proper methodological form to derive propositions and research conclusions from large samples of information. With encyclopedic knowledge it is no different: it is a better use of refining actions to start with a lot of data than to refine something that was already refined from the outset. At the very least, the chances of yielding something unexpected are greater in the former scenario.
From a semantic perspective, however, the expansion of meaning doesn't necessarily carry the same positive connotations. Jaron Lanier, in a widely read article, criticized Wikipedia prose for its lack of a discernible voice, which he says prevents him from accessing the "full meaning" of a text. "Reading a Wikipedia entry," he says, "is like reading the Bible closely. There are faint traces of the voices of various anonymous authors and editors, though it is impossible to be sure."[5] Indeed, tracing a line of thought or narrative path through a Wikipedia text can be a daunting task. It is in the fecundity of complex, multi-layered, and difficult-to-navigate knowledges, however, that so many people gain an opportunity to test their talents at editing, snipping, excising, stripping and modifying their way to a reality of their choosing.
An alternative paradigm of knowledge, based in poststructuralist thought, repudiates Lanier's implicit yearning for stable, singular and "full" meanings. In this paradigm, language is inadequate to precisely encapsulate the meanings of events and phenomena (above all when they are experientially shared across different groups). In this paradigm the singularity of knowledge is rejected, since representations of reality are always situated and destined to fragment into a multiplicity of perspectives.
Had Lanier read the article on "Israeli terrorism", he would not necessarily have detected anything bizarre in the mention of the Sabra and Shatila massacre (in the latest version, which qualified who was directly and indirectly involved). Chances are, however, that the article lacked a certain crescendo leading to the gratification of a crystal-clear narrative or coherent story. What we have in this article, because of both sides exerting pressure against the other, are more facts to handle, facts that have often been needled through to just barely gain acceptance. If anything, what Lanier witnessed is an act of collaborative exclectics: the attempt by several people to put their minds "around" an expansive reality by teasing out the fullest meanings from concepts that operate across a wide spectrum of diverse subjectivities.
1.4 Dialogic interaction and growth cycles in knowledge
Predicated on a many-to-many network of communication, social media are inherently dialogic in the sense that knowledge is co-constructed between users who, institutionally speaking, are for the most part theoretical equals in the participation process. This study asks whether the increase in dialogic activity leads, in effect, to richer information sets that factor into the production of knowledge. In Wikipedia, dialogic interaction translates into a scenario where user A imports perspective X into the interactive space of knowledge production while user B imports perspective Y. At a certain point in time, and depending on the interactive space that structures utterances between the two users, the event would process a richer set of information than it would have with one less user. The process will have it, nevertheless, that knowledge cannot simply deepen in semantic complexity as users add input.

There are two principal epistemological actions that explain how a group of knowledge producers comes to make decisions regarding new inputs. The first is "agonistic reasoning", or put simply, "argumentation." In this model, competing or mutually exclusive ideas are counterposed in the hope of offering a comparative view, thereby giving participants in the deliberation process a better-informed method of weighing the individual merits of each side. Decisions in favor of one side will come, to whatever degree, at the expense of the inferior, competing idea. This type of epistemology has been associated with the sifting and winnowing of ideas -- from a surface-level view, an act of data reduction. In the realm of ideas it is still possible, however, that the refinement of a knowledge product arising out of argumentation would yield semantically richer data, since, in the process, the surviving knowledge has historically interacted or "conversed" with the losing inputs. Thus, the semantic complexity of a refined argument hinges on the ability of the reader (or, more likely, the researcher) to trace its latent genealogy. The common experience, though, is that the average reader will not go to the extreme of tracing the intellectual genealogy of an article. In that situation, a line of reasoning that loses out in the agonistic process never resurfaces, and the moratorium on that strand of thought is a certain one.
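Tracing that latent genealogy is at least mechanically feasible, since every Wikipedia article exposes its complete revision log. As a rough, hedged illustration (a convenience sketch, not part of the thesis methodology), the following pulls an article's recent revisions through the public MediaWiki action API; the article title is only an example and error handling is omitted:

    import requests

    API = "https://en.wikipedia.org/w/api.php"

    def revision_history(title, limit=50):
        """Fetch recent revisions (who edited, when, with what edit summary)."""
        params = {
            "action": "query",
            "prop": "revisions",
            "titles": title,
            "rvprop": "ids|timestamp|user|comment",
            "rvlimit": limit,
            "format": "json",
        }
        pages = requests.get(API, params=params).json()["query"]["pages"]
        page = next(iter(pages.values()))  # single title -> single page record
        return page.get("revisions", [])

    # Skim the edit trail a researcher would mine for "losing" inputs:
    for rev in revision_history("Lebanese Civil War", limit=10):
        print(rev["timestamp"], rev["user"], rev.get("comment", ""))

Each revision ID returned here can be fed back into the same API to retrieve the full text of that version, which is what makes the genealogical reading described above possible in principle.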
Dialogue, in its alternative goal, is an attempt at interweaving conceptually different inputs, at "negotiating" meaning in a way that factors all of the inputs into the outcome. It is an epistemic action related to constructivism, which presumes a reality that does not exist outside the human ability to conceive it. While this paradigm is still controversial in its application within the empirical sciences, it has gained widespread influence in the qualitative study of media and representation systems. It so happens that this second connotation of "dialogue" is the one to associate with the semantic-expansive effects of knowledge, since any new input will, in theory, increase the number of perspectives built into a representational construct without invalidating the competing perspectives.
1.5 Interwoven strands of individuated thought and reasoning (Habermasian win-loss model) or collective mind (irreducible conceptual blends, similar to Benhabib's universal particularism)? (Ferrero's collective agency class)
2.1 Hypothesis
H1: Which better explains the attributes of Wikipedia knowledge outcomes: an exclectically conceived (loose, fuzzy, indeterminate) end-product or a dialectically conceived (refined, specific, narrowed, determined) end-product?
H2: Does this knowledge resemble a Habermasian vision of a communicative ethic or a Benhabibian vision?
3.1 Theoretical Justification
4.1 Research problem/methodology section
One way to measure the degree of exclecticity versus dialecticity is to measure how determined the textual components are. I can do a discourse analysis of certain lexemes. I can do a narrative analysis to see whether the text lends itself to coherent narratives or whether it does the opposite and dilutes them (what I call narrative fuzziness).
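To make this concrete, here is one crude, hedged sketch of such a lexeme measurement. The intuition (my own, untested) is that a semantically determined lexeme keeps narrow, repetitive verbal company, while a contested, "fuzzy" lexeme keeps diverse company; the function name and window size are arbitrary illustrative choices:

    import re

    def context_diversity(text, lexeme, window=5):
        """Type-token ratio of the words surrounding each occurrence of a lexeme.

        Higher values mean more varied company and, on this crude proxy,
        a fuzzier, less semantically determined term.
        """
        tokens = re.findall(r"[a-z']+", text.lower())
        neighbors = []
        for i, tok in enumerate(tokens):
            if tok == lexeme:
                neighbors.extend(tokens[max(0, i - window):i])  # words before
                neighbors.extend(tokens[i + 1:i + 1 + window])  # words after
        if not neighbors:
            return 0.0
        return len(set(neighbors)) / len(neighbors)

    # Example: compare how "terrorism" behaves across two saved revisions.
    # early = open("revision_2005.txt").read()
    # late = open("revision_2006.txt").read()
    # print(context_diversity(early, "terrorism"), context_diversity(late, "terrorism"))

A per-lexeme score like this could then be aggregated across an article's contested terms to compare the exclectic and dialectic phases of its history.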
Sunday, January 25, 2009
The ethics of networked thinking in higher education 1.4.0
The internet teems with student peer-to-peer communication, encompassing emergent forms of chatter, collaboration, and document and file sharing, to name a few. It is cyber-conduct that university faculty and administrations are beginning to contemplate, and then worry about, since most of it happens outside their sphere of control. While some have taken these realities to predict a post-university world in which students become the administrators of their own customized educational experiences, I focus instead on questions that deal with the adaptation or reconfiguration of the university's instructional program, under the assumption that universities are in a unique position to lend their resources, traditions and values to a dialogue with a diverse field of emerging learning behaviors and initiatives.
An ethical question therefore ensues: with the internet beginning to play a much larger, more practical role in learning and intellectual production, how "isolated" versus how "socially connected" should students be as they engage their academic projects? For example, the typical college exam is an activity designed to be experienced in isolation. University curricula across the board already heavily favor individualism over collaborative intellectual work in undergraduate research and writing exercises. Even the exceptions do not necessarily defy the prevailing ethic; university-sanctioned study groups ultimately contend with sharply competitive grading styles that still reinforce a personal, as opposed to collective-cognitive, responsibility for the advancement of learning.
Despite this, no university in practice conforms neatly to an extreme "individualist" or "networked thinking" model of learning. This analysis first exposes the underlying, conflicting pedagogies that thrive and interact in these exciting times of rapid technological change. For this, I will present examples of how different learning systems came to interoperate and collide in the watershed 2008-2009 academic year. What follows is a breakdown of the three primary ideas that explain the tension between individual and "networked thinking" pedagogies.
Tensions
1. The imperative of measuring scholarly progress
2. Puritan conceptions of intellectual "laboring"
3. Notions of intellectual "ownership"
Networked Thinking's tenuous relationship with academia
Networked thinking, in its most general sense, is just one among several terms evoking the notion that people, interconnected by a platform that facilitates group forms of discussion, information sharing, deliberation, reasoning and co-constructive activities, can yield cognitive accomplishments that, in the collective, are more valuable than anything that could be accomplished through the sum of its constituent individual thinkers.
It has been researched and known with increasing conclusiveness since the 1970s that social forms of learning significantly improve students' ability to engage with and master their academic projects.[1] Learning methods that employ collective cognitive exercises have been embraced to varying extents by different universities. Almost always, these currents must be understood against the backdrop of a hundred years of university instruction that supports, above all, an individualized learning and evaluative experience.[2]
This conflict of learning paradigms, which at one extreme forces students through a solitary obstacle course of the school's design and at the other fosters a community of independent and unfettered thinking and research, can be explained in part by the university's understanding of students' developmental needs and intellectual responsibilities at various stages of an academic career.
Tension #1: Measurement and Evaluation of Scholarly Progress
One of the main rationales cited for individualized learning is that it allows the academic authority figure to hold any student accountable to the "essential prerequisites" of a standardized curriculum. The student, whose performance is observed and measured, can be held back or pushed forward along a linear trajectory representing the curriculum's learning objectives.[3] Because much of networked thinking involves distributing intellectual labor among multiple learning agents, a logistical conflict arises when it comes time to monitor a particular student's progress, since social learning clouds the boundaries of individual intellectual responsibility.
Those who cynically see the university as bending over backwards to market imperatives and neoliberalism will find an additional explanation for the dominance of a pedagogy that curtails experimental social learning techniques. This, the critics say, is because the university is doing a better job of training undergraduates than of educating them. As the prominent internet sociologist Clay Shirky wryly noted in a conference talk, universities do not ask students to figure out the formula for hydrochloric acid because they need it to be discovered. Rather, he says, the university is giving students an opportunity to solve pre-fabricated problems, as reflected in the term "learning by doing".[4] This intellectual-sandbox pedagogy largely explains the difference between undergraduate work, which is highly programmatic and predictable, and graduate-level work, where students depart from the script to produce ground-breaking, publishable thought.
It is no coincidence, thus, that graduate-level seminars depart from the didactive style of teaching, instead, encouraging its students to deliberate in free-flowing, social formats. The need at the graduate level for universities to compete in the marketplace of ideas, positions its students, not merely as learners, but as producers who are entrusted with higher-order cognitive tasks. Therefore, different forms of networked thinking are encouraged at the graduate level. Thus, seminars, co-authorships, colloquia, conferences, etc., are all staples of academic life beyond the bachelor's degree. Before this transition point, students are symbolically bereft of trust in their cognitive skills. If this fact is not being reflected by the fashionable moves towards pedagogies of social learning that are touted mainly within circles of educational theory, it is because in practice, the overwhelming residue of individualized learning theories lay manifest in the academic policies and syllabi of almost every American university.[5]
Tension #2: Puritan conceptions of intellectual laboring
If the first tension with networked learning relates to the practical matter of instructors needing to evaluate for student progress, then this second tension can explain a deeper, pedagogical contention that some educators may hold against collective learning practices. This same educator, incidentally, may favor learning modules that are centrally directed and supported through individualized school work. Bereiter and Scardamalia refer to this as the "Teacher A" model.[6]
to be continued...
Works cited
[1] Light, Richard J. 2004. Making the Most of College: Students Speak Their Minds. Harvard University Press, May 30.
[2] John Seely Brown, and Richard P. Adler. 2008. Minds on Fire: Open Education, the Long Tail and Learning 2.0. EDUCAUSE Review 43, no. 1 (February): 16-32.
[3] Weisgerber, Robert A. 1971. Perspectives in Individualized Learning. F. E. Peacock Publishers, Inc.: p. 13
[4] Shirky, Clay. 2008. It's not information overload. It's information failure. presented at the Web 2.0 Expo, September 9, Javits Center. http://www.krisjordan.com/2008/09/18/clay-shirky-keynote/.
[5] The excerpt below serves as an example of the individualist attitude resonating throughout policy and instructional documentation in American universities. From the Committee on Academic Conduct in the College of Arts and Sciences. 2007. Academic Honesty: Cheating and Plagiarism. University of Washington, September 4. http://depts.washington.edu/grading/issue1/honesty.htm.
"Typically, students will create a detailed outline together, then write separate papers from the outline. The final papers may have different wording but share structure and important ideas. This is cheating because the students have failed to hand in something that is substantially their own work..."
[6]Carl Bereiter and Marlene Scardamalia, “An Attainable Version of High Literacy: Approaches to Teaching Higher-Order Skills in Reading and Writing,” Curriculum Inquiry 17, no. 1: 19-30.
An ethical question therefore ensues: with the internet beginning to play a much larger, more practical role in learning and intellectual production, how "isolated" versus "socially connected" should students be as they engage their academic projects? A typical college exam, for example, is an activity designed to be experienced in isolation. University curricula, across the board, already heavily bias individualism over collaborative intellectual work in undergraduate research and writing exercises. Even the exceptions do not necessarily defy the prevailing ethic; university-sanctioned study groups ultimately contend with sharply competitive grading styles that still reinforce a personal, as opposed to collective, cognitive responsibility for the advancement of learning.
Despite this, no university in practice conforms neatly to an extreme "individualist" or "networked thinking" model of learning. This analysis first exposes the underlying, conflicting pedagogies that coexist and compete in these times of rapid technological change. To that end, I will present examples of how different learning systems came to interoperate and collide with each other in the watershed 2008-2009 academic year. What follows is a breakdown of three primary ideas that explain the tension between individual and "networked thinking" pedagogies.
Tensions
1. The imperative of measuring scholarly progress
2. Puritan conceptions of intellectual "laboring"
3. Notions of intellectual "ownership"
Networked Thinking's tenuous relationship with academia
Networked thinking, in its most general sense, is just one among several terms evoking the notion that people, interconnected by a platform that facilitates group discussion, information sharing, deliberation, reasoning and co-constructive activity, can yield cognitive accomplishments that are, in the collective, more valuable than the sum of what its constituent individual thinkers could accomplish alone.
Research since the 1970s has shown with increasing conclusiveness that social forms of learning significantly improve students' ability to engage with and master their academic projects.[1] Learning methods that employ collective cognitive exercises have been embraced to varying extents by different universities. Almost always, these currents must be understood against the backdrop of 100 years of university instruction that supports, above all, an individualized learning and evaluative experience.[2]
This conflict of learning paradigms, which at one extreme forces students through a solitary obstacle course of the school's design and at the other fosters a community of independent and unfettered thinking and research, can be explained in part by the university's understanding of students' developmental needs and intellectual responsibilities throughout the various stages of an academic career.
Tension #1: Measurement and Evaluation of Scholarly Progress
One of the main rationales cited for individualized learning is that such a strategy allows the academic authority figure to hold any student accountable to the "essential prerequisites" of a standardized curriculum. This student, whose performance is observed and measured, can be held back or pushed forward along a linear trajectory representative of the curriculum's learning objectives.[3] Because much of networked thinking involves distributing intellectual labor among multiple learning agents, a logistical conflict arises when it comes time to monitor a particular student's progress, since social learning clouds the boundaries of individual intellectual responsibility.
Those who cynically see the university as bending over backwards to market imperatives and neoliberalism will find an additional explanation for the dominance of a pedagogy that curtails experimental social learning techniques: the university, these critics say, is doing a better job of training undergraduates than of educating them. As the prominent internet sociologist Clay Shirky wryly noted in a conference talk, universities do not ask students to figure out the formula for hydrochloric acid because they need it to be discovered. Rather, he says, the university is giving students an opportunity to solve pre-fabricated problems, an approach otherwise reflected in the term "learning by doing".[4] This intellectual sandbox pedagogy, for the most part, explains the difference between undergraduate work, which is highly programmatic and predictable, and graduate-level work, where students depart from the script to produce ground-breaking, publishable thought.
It is no coincidence, then, that graduate-level seminars depart from the didactic style of teaching and instead encourage students to deliberate in free-flowing, social formats. The need at the graduate level for universities to compete in the marketplace of ideas positions students not merely as learners but as producers entrusted with higher-order cognitive tasks. Different forms of networked thinking are therefore encouraged at the graduate level; seminars, co-authorships, colloquia, conferences, etc., are all staples of academic life beyond the bachelor's degree. Before this transition point, students are symbolically bereft of trust in their cognitive skills. If this fact is not reflected in the fashionable moves toward pedagogies of social learning touted mainly within circles of educational theory, it is because, in practice, the residue of individualized learning theories remains manifest in the academic policies and syllabi of almost every American university.[5]
Tension #2: Puritan conceptions of intellectual laboring
If the first tension with networked learning relates to the practical matter of instructors needing to evaluate student progress, then this second tension explains a deeper, pedagogical contention that some educators may hold against collective learning practices. This same educator, incidentally, may favor learning modules that are centrally directed and supported through individualized school work. Bereiter and Scardamalia refer to this as the "Teacher A" model.[6]
"In school, the greatest premium is placed upon 'pure thought' activities -- what individuals can do without the external support of books and notes, calculators, or other complex instruments. Although use of these tools may sometimes be permitted during school learning, they are almost always absent during testing and examination. At least implicitly then, school is an institution that values thought that proceeds independently, without aid of physical and cognitive tools. In contrast, most mental activities outside school are engaged intimately with tools, and the resultant cognitive activity is shaped by and dependent upon the kinds of tools available."
to be continued...
Works cited
[1] Light, Richard J. 2004. Making the Most of College: Students Speak Their Minds. Harvard University Press, May 30.
[2] Brown, John Seely, and Richard P. Adler. 2008. Minds on Fire: Open Education, the Long Tail and Learning 2.0. EDUCAUSE Review 43, no. 1 (February): 16-32.
[3] Weisgerber, Robert A. 1971. Perspectives in Individualized Learning. F. E. Peacock Publishers, Inc.: p. 13
[4] Shirky, Clay. 2008. It's Not Information Overload. It's Filter Failure. Presented at the Web 2.0 Expo, September 9, Javits Center. http://www.krisjordan.com/2008/09/18/clay-shirky-keynote/.
[5] The excerpt below serves as an example of the individualist attitude resonating throughout policy and instructional documentation in American universities. From the Committee on Academic Conduct in the College of Arts and Sciences. 2007. Academic Honesty: Cheating and Plagiarism. University of Washington, September 4. http://depts.washington.edu/grading/issue1/honesty.htm.
"Typically, students will create a detailed outline together, then write separate papers from the outline. The final papers may have different wording but share structure and important ideas. This is cheating because the students have failed to hand in something that is substantially their own work..."
[6] Bereiter, Carl, and Marlene Scardamalia. “An Attainable Version of High Literacy: Approaches to Teaching Higher-Order Skills in Reading and Writing.” Curriculum Inquiry 17, no. 1: 19-30.
Thursday, March 27, 2008
Beyond knowledge production: Wikipedia as cognosphere
(disclaimer: this blog post is intended to be read like a wiki article; it is a work-in-progress. I must turn this blog post into a Master's thesis that I can defend by August 2008)
As with most literature on web 2.0 technology, words like "open source", "crowdsourcing" and "mass collaboration" are conceptually committed to industry and organizational management needs as well as product improvements. In a web 2.0 world that works right, communities of software engineers pool thought in order to debug problems and accelerate the pace of innovation; likewise, information workers in any venue can reap rewards from the way these technologies facilitate the distribution of labor. Because web 2.0 technologies are seen as superior organizational tools, authors and critics tend to understand their function and impact through the narrow lens of producerism. Thus, you will find these tools evaluated for their ability to somehow improve the accuracy, profitability or usefulness of some research project (i.e. product refinement). It is no surprise, then, that an emphasis in the arena of Wikipedia research concentrates on questions of information credibility and usability.1
In a different paradigm of inquiry, however, these same technologies exist as something more than the internet's version of the product assembly line. The overlooked process is one where internet tools work like a bridge, allowing different parts of the world to see and speak to each other. Wikipedia is to globalization what the corpus callosum is to the left and right brain hemispheres: a constant coordinator of disparate information that will, in turn, be sent out for higher-level processing.
In a collaborative age, the synthesis of separated elements, especially knowledge synthesis, is the production motif most relevant to today's online projects. This should be contrasted with the closed-door intellectual exchanges of yesteryear's information-working elites. The comparison serves to examine the effects of dialogic thinking within online epistemic communities. The argument is that, with web 2.0 technologies, an online space will at least ground its production process in a wider, more pluralistic human perspective.
[expand on the idea of knowledge production 2.0 here. key words: refinement. symbiosis. dialectics. exlectics]
Writing Dump
Digitization movements have become the necessary first step in the massive hauling of textual artefacts left by our predecessors into a new theatre. Accelerating the movement is Google, which continues to physically capture untold terabytes of printed knowledge that existed prior to the birth of its digital empire. A sea of old, dusty books awaits a new existence as potential nodes of a larger network. Metadata such as tags, RSS feeds, comments, and social bookmarks offer the potential to breathe life into texts that had no way of circulating and attracting the same type of attention in the physical world. While it is still unclear how exactly raw printed materials will be digested by mass internet communities, one thing is clear: fresh online output now grows from an informational base that is deeper and richer than ever before.
For Wikipedia, the exemplar of this new paradigm of content production, the job from the outset was to import a pre-constructed universe of knowledge, in piecemeal fashion, into its collaborative encyclopedic format. Out of all the web 2.0 movements, it is currently the only one significantly supplanting, interrupting, or competing head-to-head with texts (books, reports, magazines, etc.) that are in the same business of distilling meaningful information about our known world. A less invasive version of Wikipedia might have limited the masses to commenting on the margins of authoritative texts. Instead, Wikipedia gave internet audiences prime textual real estate. This "encyclopedia" of our newly connected world also became the newest technology granting the public unprecedented managerial powers, instantly demoting authoritative sources to a function that was at most supportive or marginal.
Along with this newfound power, Wikipedians assumed the burden of culling through a disorderly and contradictory repository of pre-existing texts, through which they would collaboratively assemble realities of all kinds. One consequence of collaborative production has been the explosion of knowledge articles covering everything from the mundane to the absurd. With Wikipedia viewing itself as a project to become the world's largest and best encyclopedia, activity and the deliberations backing it appear purposeful and consensual, even with the full understanding that editors are often weighing their ideas against each other, as is done in the scientific process (agonistic reasoning).
, much the way that NASA's search for extra-terrestrials included having users at home share some of their computer's processing power to crunch numbers.
Residing in the most socially sensitive areas in Wikipedia does one find
... could be summarized to the point of discernibility.
Writing Dump (please disregard everything below)
Observation of change in the shape and form of knowledge, supplemented by a history log that tracks every change made by anybody to an article, clues us into the key structures of knowledge that are vulnerable to re-negotiation. This thesis understands Wikipedia through a double lens that sees content being assembled even as it is torn down and re-structured.
This thesis sets out to observe areas of Wikipedia where distinct and novel textual outputs arise out of dialogic interaction. By moving beyond the mainstream analytic framework of Wikipedia products, I no longer concern myself with evaluation criteria such as degrees of "good" or "bad". With subtler evaluation criteria, at least an accounting can be made of the various textual shapes and colorings that result from diverse Wikipedians converging on globally central spaces of interaction. This means looking into the internal composition and content of the "product" itself (the encyclopedic article), bearing in mind that the empty spaces where various editors input thought are akin to a modern information-scape that is fluid and ever-changing. Behind every article is a collaborative work space, called the "talk page," where, in correspondence with the editable article page itself, different systems of representation cohabit, coalesce, blend or antagonize.
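Both of the artifacts this observation leans on, the per-article revision history and the talk page, can be collected programmatically. The sketch below is purely illustrative of how such an observational record might be gathered; it assumes the standard public MediaWiki API endpoint and the third-party Python requests library, and the article title is an arbitrary example, not one of the thesis's cases.

import requests

API = "https://en.wikipedia.org/w/api.php"

def fetch_revisions(title, limit=10):
    """Fetch recent revision metadata (who changed what, when)
    for a page via the public MediaWiki API."""
    params = {
        "action": "query",
        "prop": "revisions",
        "titles": title,
        "rvprop": "timestamp|user|comment",
        "rvlimit": limit,
        "format": "json",
    }
    data = requests.get(API, params=params, timeout=30).json()
    # The API keys results by internal page id, so take the sole page.
    page = next(iter(data["query"]["pages"].values()))
    return page.get("revisions", [])

# The article's history log, then the history of its companion
# deliberation space, the "Talk:" page.
for rev in fetch_revisions("Encyclopedia"):
    print(rev["timestamp"], rev.get("user", "?"), rev.get("comment", ""))
for rev in fetch_revisions("Talk:Encyclopedia", limit=5):
    print(rev["timestamp"], rev.get("user", "?"), rev.get("comment", ""))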
A Wikipedia article could be closed, "certified", rendered into a usable product. But that would merely put a moratorium on underlying, dynamic processes that could play on indefinitely as more and more participants gain access to the Internet. These are the lesser-understood consequences of a globally far-reaching dialogic interaction. It is a space that combines a multitude of culturally-sealed ideologies, discourses, grammars and concepts -- forcing encounters and collisions on an unprecedented scale.
This research arises out of a gap in the basic understanding of Wikipedia's knowledge production process. A slew of questions awaits an explication of the process itself: Is Wikipedia more than just an encyclopedia? If so, what is it? How is global "knowledge" changing because of Wikipedia? How is human understanding being affected by dialogic interactions in Wikipedia? And finally, if we are to look at products, have we properly understood why and how usable texts are born and evolve in this space? By concentrating on socially controversial encyclopedic topics, I focus on areas of Wikipedia where diverse global inputs are most likely to compete and exchange. Perhaps, by doing so, a better measure can be devised for understanding how Wikipedia fares as a tool that meets the intellectual needs of the 21st century. Before embarking on an explication of knowledge production in Wikipedia, I first offer a historical sketch of the pragmatic function encyclopedias served for the cognitive needs of societies that increasingly came into contact with each other.
The history of cognitive spheres and knowledge games
The encyclopedia has always done much more than simply serve as society's source for reference information. Ulterior agendas returned with a vengeance at the time of the Enlightenment, when elite men of letters strove to end the Church's stranglehold over truth in profound ways -- men such as Denis Diderot and Jean le Rond d'Alembert, who drastically restructured the shape, and therefore the rhetoric, of the knowledge body. For one, they alphabetized all articles from A to Z, in a nod to the spirit of empirical rationalism. By ordering arbitrarily along the alphabet, the encyclopedists dispensed with a metaphysical ordering of the universe.
As it usually went, knowledge systems were owned, controlled and operated by those in power. Oftentimes the most critical periods of transition from one major social or ideological system to the next turned on how well certain ideological groups were able to interfere with a reigning knowledge system from within.
One method of a system's survival entails the placement of gatekeepers that "certify" knowledge. Dictionaries and encyclopedias were among the textual artefacts that helped to build a reality of record, preserving legitimized knowledge by pruning away the linguistic and conceptual change arising from grassroots or extra-national forces. By controlling the epistemological means of production, elites sought to increase their hegemonic power, orienting and steering action, thought and behavior in the social realm.
Behind almost every encyclopedia project, one can find an individual or community with a system of thinking to promote, one that offered a way for societies to brand a set of abstract concepts and relations. The underlying power inherent in the task of mapping social realities is too great to ignore, and while many encyclopedists fit the historical label of the sincerely "curious" intellectual quite well, it is another thing altogether to dismiss the powers associated with information condensation, systematic excision, context stripping, and the fixing of dynamic phenomena into codified, digestible tracts. To systematize knowledge meant then what it means today: the ability to superimpose a privileged conceptual map over what is a much denser and more dynamic field of cognizable possibilities.
Encyclopedias are fitting to study as culturally-sealed systems of top-down thinking, since their mode of knowledge production has historically been centralized and exclusivist, its efforts usually attributable to elite textual communities or to kings with political and ideological agendas. Encyclopedias could, at the very least, be seen as perfectly emblematic of a particular culture's official understanding of reality.
It is at this point that I would like to offer the metaphor of a gel capsule to explain the mobility and interaction of disparate ideas in the age of printed knowledge. Medicine in this capsular form comprises an admixture of pharmacologically-active granular agents held together by a gel encasing. The casing ensures that the pharmacological contents are delivered to their destination with no chance of cross-contamination. In the context of the history of mass communications, certain technologies did to ideas and knowledge what gel capsules have done to pharmaceutical ingredients: ensure the controlled diffusion of pre-formulated content.
Encyclopedias, in the context of this metaphor, are the ultimate gel capsule, packing a full admixture within the sturdiest gel encasing. Books, pamphlets, handbills, plays, and social spaces of deliberation and gossip, of course, function similarly in that they encase processed content that is eventually diffused. The more hands are able to meddle with the "admixture" -- meaning, the greater the access individuals have in determining the outcome of the knowledge -- the less that particular medium resembles a gel capsule. In this sense books are efficient "capsules", since the content within a book is assumed to be sufficiently settled to justify its closing page and hardcovers. It is the same with most print materials, whose printing is tantamount, for economic and customary reasons, to the closure of the case. The fact that a book's case can be reopened when two or more people gather at a coffee shop to weigh its ideas proves that its "capsularity" is not absolute.
Encyclopedias are more insidious than other print literature because they deal in the business of first assumptions and primary concepts that are already naturalized in language and discourse, making them more difficult to excavate for the purposes of critical inquiry.
This is to be contrasted with, say, the Tree of Cracow, the famous chestnut tree where numerous Parisians went to circulate gossip and news related to the tumultuous events leading to the French Revolution. This culture of oral communication could diffuse information as well, even if its effect was that singular knowledges, say those emanating from the king or the pope, would then refract into a whirlwind of hearsay. At the same time, an actual space that allowed for deliberation was a space that facilitated listening and dialogue, and allowed for the interpretation and re-processing of disparate information. Just imagine the people at the Tree of Cracow, opening and tampering with the gel capsule only to spill all its contents on the ground. The Tree of Cracow did not operate in isolation, however. An explosion of books, pamphlets and newspapers in the last months of 1789 supplemented the grassroots rumblings, each medium doing its own job in propping up the spaces of thought and deliberation that would, in turn, bring the Old Regime and the Church to capitulation.
[transition needed here]
It is also a given that a multiplicity of textual knowledge systems had to co-exist or compete with each other. In the case of Europe, it is clear that, for the most part, knowledge flowed freely between capitals and countries, usually travelling along trade routes. Universal methodologies (via the epistemological standards of the day) meant that any philosophe from Madrid to Moscow could travel to Paris to collaborate in the processing of information into knowledge. Differences between knowledge systems, in Europe at least, could be attributed less to geography than to school of thought: Humanists would compete with Scholastics, and Enlightenment thinkers with the clergy. The intense hierarchy needed to achieve universal (European) knowledge implied that its processing would be borderless by nature (among what could be imagined as communities of reasonable men across Europe).
Yet cultural boundedness reveals itself better when comparing knowledge systems separated by extreme geographic distance. European capitals functioned as epistemic centers that imported concepts brought in by voyagers to the Far East, assimilating and adapting new concepts so that they could be absorbed into the larger knowledge body.
With reference tools serving as the semantic bedrock of particular localities, the globe, up until the age of the Internet, played host to a constellation of encyclopedias, each one projecting its own concept map of reality, each one colored uniquely enough to exhibit obvious disparities in the way nations chose to internally structure and semantically delimit representations of reality. In brief, the rise of national discourses, made artificially intact and complete, brought about the opportunity for conflicting encounters between massive meaning-systems, with cosmopolitan and border cities serving as the most likely agora where intellectual, discursive and linguistic currents would cross each other.
Tuesday, January 29, 2008
Wikipedia and the Fragmented Mirror of Nature
Unlike many previous attempts at capturing some truth about the world, Wikipedia has elected not to impose a prejudicial barrier barring non-elites from joining the process of creating knowledge fit for an encyclopedia. Despite this drastic novelty, Wikipedia presents itself as nothing more than a traditional encyclopedic project, made to create a repository of verifiable reference knowledge for the betterment of global, civil society. While the sphere of those who can contribute has changed in profound ways, Wikipedia operates on the fundamental principle that, in the end, no matter how many people or views inform the knowledge-creation process, there rests only one version of reality for everyone to attain.
Various Wikipedia writing guidelines suggest that particular viewpoints are limited versions of something much larger: a knowledge transcendent to all biases and blindspots, one that could embody knowledge from all perceivable angles and instantiations.
Many sociologists of knowledge have referred to this attitude as "the view from nowhere". For Wikipedia, this means an aspiration to divest the most recent version of an article, as much as possible, of any one angle or perspective on a represented reality. This is not to say, however, that Wikipedia's designers think it is actually possible to achieve neutrality:
"...we can agree to present each of the significant views fairly and not assert any one of them as correct. That is what makes an article 'unbiased' or 'neutral' in the sense presented here. To write from a neutral point of view, one presents controversial views without asserting them...Disputes are characterized in Wikipedia; they are not re-enacted."
"If there is anything possibly contentious about the policy along these lines, it is the implication that it is possible to describe disputes in such a way that all the major participants will agree that their views are presented sympathetically and comprehensively. Whether this is possible is an empirical question, not a philosophical one."The author(s) of this guideline seem to be suggesting that neutrality is something that is, if not achievable, at least potentially workable, as if perfection stood at the end of a linear progression from blind, to aware, and finally, to all-seeing.
So when a backlog of unreconciled writing grows and polarizes more and more Wikipedian users, the guideline's authors assume that people simply aren't yet ready or mature enough for the mission toward which all Wikipedian collaboration aims: the "featured article", a designation granted by committee to articles deemed to have met certain writing criteria. Neutrality is one of those criteria, yet not a single featured article on a government or political topic worth fighting over has ever made it to the prestigious list.
Empirically speaking, then, Wikipedia is not patching up the great ideological fissures that divide the world's ideologues, and neutral writing strategies are failing to guide ideological antagonists toward common ground: writing that includes, synthesizes, integrates, accommodates and is sensitive to the social situatedness of knowledge artifacts.
In this first section, my goal is to explicate the philosophical incompatibility between Wikipedia's strategy of prescribing writing styles that effect a sense of total awareness (i.e. journalistic notions of objectivity) and an encyclopedic space that is structurally designed to produce monadic representations of reality.
Encyclopedias: structurally inhospitable environments for objective writing
There are many reasons why news media outlets, the original purveyors of disinterested description, are continually able to produce so-called "objective" written accounts of reality. Although this is quickly changing, a news report's purported truth doesn't disintegrate under the prolonged scrutiny of one hundred critical voices the way it can inside Wikipedia. To look at it quantitatively, the less heterogeneous and populous the editorial environment, the less time it takes for a textual product to pass the vetting process. A news artefact can assert its own objectivity in the absence of dissenting views from individuals occupying elevated positions in the contemporary public sphere (this will change to the extent that bloggers continue to acquire attention and respect). In short, the fewer consciousnesses inhabiting the same space, the less likely a particular impression of reality is to encounter its challenge.
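To make that quantitative hunch concrete, consider a deliberately crude toy model, offered here as my own illustration rather than as anything measured: if each of n editors independently raises an objection with some small probability p, a claim escapes challenge with probability (1 - p)^n, a number that collapses quickly as the editorial space grows more populous.

import random

def survival_rate(n_editors, p_objection=0.05, trials=10_000):
    """Estimate the chance a claim goes unchallenged when each of
    n_editors independently objects with probability p_objection.
    Analytically this is (1 - p_objection) ** n_editors."""
    unchallenged = sum(
        all(random.random() > p_objection for _ in range(n_editors))
        for _ in range(trials)
    )
    return unchallenged / trials

for n in (1, 5, 50, 500):
    print(f"{n:>3} editors -> {survival_rate(n):.3f}")

With p = 0.05, a lone editor lets a claim stand about 95% of the time; fifty editors, under 8%; five hundred, essentially never.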
But more significantly, the content delivered through an encyclopedia article symbolizes something different from the knowledge claims carried in periodicals. The symbolic difference boils down to the fact that periodicals are "snapshots" of reality, whereas encyclopedias are the exact opposite -- they are supposed to withstand the test of universal consensus accumulated over time.
This constraint relates back to the historical function of encyclopedias as tools of reference and introductory learning. Whereas encyclopedias attempt to cover the "aboutness" of a particular thing or phenomenon, a news report concentrates on immediate events and, by virtue of its sharp focus, won't need to address related or contextual information in much depth. Burdened with the ambitious task of integrating and structuring information into a holistic corpus of "human understanding", the information carried within an encyclopedia is defined just as much by its relationship to other phenomena. In other words, while it may be possible to understand what something "is" by reading a news account, it is only once someone reads about it in an encyclopedia that they get the sense of what something "is not". Naturally, this adds a further burden to the task of encyclopedic representation, since an integrated, structured, comprehensive picture of the world is harder to achieve than fleeting snapshots and news reports that relate less clearly to adjacent phenomena.
Yet a greater reason exists for why an all-encompassing, transcendent reality remains elusive to the encyclopedic project. The culprit lies in the encyclopedia's ambitious attempt to consolidate reality by cataloguing it and taming it -- reducing it from a vast, fluid and multi-perspectival phenomenon to a distilled product, formatted to package information in a way that is topical, segmented, thematic, interlinked, chronological, linear, discursively coherent, consistent in tone and style, etc.
---
What makes Wikipedia the encyclopedia of its age is the almost militant desire to force the conceptual coherence of knowledge products on a global stage. If the Internet cut its globalized audience some slack by allowing ideas to co-exist in a loosely hyper-linked galaxy of documents, then Wikipedia asked of everyone the unthinkable: mass collaboration under claustrophobic conditions.
---
Encyclopedic initiatives point to an attempt by a group of individuals to corral a sea of information into a manageable textual body: an article. By tackling concepts and phenomena that mean many things to many people, the encyclopedia is responsible for capturing, via description, the polyvalency of its subject. In practice this might involve creating an article on the history of political violence related to the Israeli-Palestinian conflict. The encyclopedist, like the Wikipedist, would believe it possible to create a definitive account of this topic, no matter how volatile or centrifugal the social forces may be that threaten to unravel the body of text into a thousand different ideological strands.
Lying underneath every encyclopedic operation is the act of filtering information into a distilled knowledge product. There are many methods of arriving at an information-condensed account, be it through excision, elision, grafting, blending and other such acts of reduction.
The second operation is to introduce structure and order to knowledge.
---
In Wikipedia this difficulty is demonstrated as competing editors disagree over how to arrange and organize certain facts in relation to others. How facts end up arranged will, in turn, affect how they are perceived in terms of significance and importance. This is not to say that in journalism the structuring of information is trivial. To the contrary, a newspaper's pyramidal structure, with its headline and lead paragraph, can do much to determine the significance of a story's various facts. The issue is simply more pronounced and problematic in Wikipedia, where various communities will attempt to manipulate knowledge outcomes through the way articles are structured and named.
Circumventing Neutrality: loopholes at all levels
Loopholes in the encyclopedic structure
1. Incompatible ontology-knowledge categorization schemes
2. Proliferated/redundant nomenclature (titles/headers) and information (article body)
3. Proportional uncertainty of empirical content
4. Narrative options
5. Discursive chains of significance
to be continued...