
Alchemical Musings

This blog has moved to a new location - http://alchemicalmusings.org

Sunday, July 23, 2006

The Alchemist has moved!

As promised, the semester is over and this blog has moved to a new location - http://alchemicalmusings.org (rss).

I have turned off commenting on this site and transferred all posts to the new location, but will leave this blog up for reference.

Thanks for playing.
/Jonah

Sunday, May 14, 2006

Held together with Glureed

I am bummed at the failure of politicians and the media to connect Net Neutrality to China's internet censorship. The censorship issue led to congressional hearings where:

"The House International Relations subcommittee's top Democrat, Tom Lantos, told representatives of the companies that they had accumulated great wealth and power, 'but apparently very little social responsibility.'

"Your abhorrent actions in China are a disgrace. I simply don't understand how your corporate leadership sleeps at night," the Associated Press quoted him as saying. (BBC News)

Meanwhile, on the home front, we fail to recognize censorship under the guise of its free market counterpart --
"In today's sausage factory of knowledge production, that is exactly the situation that we face. Dominant groups explain the world through their control of knowledge production. Subordinate groups are excluded, and as a result, subordinate knowledges are excluded as well. In liberal societies, these knowledge disqualifications are not achieved primarily through the legal authority of censorship. But as Foucault reminds us, these disqualifications are made by the 'ensemble of the rules according to which the true and the false are separated and specific effects of power are attached to the true.'" (The Birth of Postpsychiatry, p. 139)

Free and open discourse is under attack, in the homeland. Just ask a ninja:

YouTube - Ask A Ninja Special Delivery 4 "Net Neutrality"




And here is something you can do:

Save the Net

Sunday, April 16, 2006

Turtle Totems

Seymour Papert, the inventor of Logo, spoke at Teachers College on Monday, April 10th. I was lucky enough to hear him talk at a standing-room-only event. My former employer, Idit Caperton, studied with Papert, and MaMaMedia incorporated many of the principles he advocated.

His ideas, once stated, are remarkably simple and obvious--usually a mark of the good ones. He thinks we are teaching mathematics ass-backwards, and that we ought to introduce it the way it came about in the history of humanity: engineering first. This approach creates and fosters the demand for mathematics. Pyramids, navigation, and astronomy all drove the development of mathematics, and robotics and programming can provoke and instigate the need for mathematical abstraction in education. Sounds about right.

Interestingly, his experiments have led to anecdotal accounts of a reversal of the gender discrepancy in science/math. He claims with an engineering first approach, girls actually quickly excel beyond the boys, venturing beyond speed and destruction to the mastery of a much wider variety of skills with the systems.

He also demonstrated, in 10 minutes flat, how Logo can be used to teach 2nd graders the notion of a mathematical theorem (in creating any closed shape, the turtle will rotate through a full 360 degrees - repeat N [fd 10 rt 360/N]) as well as how to introduce calculus (through the idea of the limit). He made the point that once a second grader is arguing -- "that's not a circle, it's lots and lots of short lines" -- you have already won...
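The theorem is easy to check without a Logo interpreter. Here is my own quick Python sketch (step size and function name are mine) that simulates `repeat N [fd 10 rt 360/N]` and confirms the turtle ends up back where it started, having rotated through exactly 360 degrees:

```python
import math

def walk_polygon(n, step=10):
    """Simulate Logo's `repeat N [fd step rt 360/N]`: return the
    turtle's final position and its total rotation."""
    x, y, heading = 0.0, 0.0, 0.0  # start at origin, facing "north"
    total_turn = 0.0
    for _ in range(n):
        # forward `step` units along the current heading
        x += step * math.sin(math.radians(heading))
        y += step * math.cos(math.radians(heading))
        # right turn by 360/n degrees
        heading = (heading + 360.0 / n) % 360.0
        total_turn += 360.0 / n
    return (x, y), total_turn

# Any closed shape: the turtle returns to its start after turning 360 degrees.
pos, turn = walk_polygon(5)
```

The "lots and lots of short lines" argument falls out of the same sketch: as n grows, the polygon approaches a circle while the total trip stays fixed at 360 degrees.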

If Logo has a failing, it's that it does not provide the necessary scaffolding for teachers other than Papert to teach with it effectively. I had been exposed to Logo in the past, but never really understood its appeal until Seymour started turtling.

Interestingly, Logo is far from irrelevant. Mark Shuttleworth's ClassroomCoders curriculum imagines a Logo -> Squeak -> Python pipeline for educating the programmers of the future...

Seymour is also heavily involved in the $100 laptop project, a project which many consider to be one of the most important educational initiatives currently underway.

soft metamedia?

April 7th I heard Lev Manovich talk at Pratt. I am a big fan of Manovich's written work, and the Language of New Media was instrumental in my analysis of tagging.

Friday night Manovich showed us ideas in progress, and bravely admitted that they were not completely formed. He talked about describing the evolution of media in evolutionary terms. As in, the next logical progression after getting all our media digitized (i.e., simulating physical processes w/in the digital environment) is the breeding and hybridization of the media. He is claiming that some of what we are now seeing in 'moving graphics' or 'design cinema' is actually a new form of media, distinct from what came before it. And he is interested in identifying the trunks and branches of this media evolution.

Plaid Itsu was a film he used as an example of a completely new form. Whereas multimedia was the assembly of multiple forms of media adjacent to each other, metamedia is the combination of these forms into a new unified whole. He pointed out the live action photography, combined with traditional design aesthetics, combined with graphics, etc etc. Not sure I bought it, but it was an interesting assertion.

The best question from the audience alluded to a longstanding disconnect between media and communication theorists. Manovich is looking exclusively at the end product of the media being created, and not examining the cultural and social conditions that lead to its creation. There may be mileage from this rarefied approach, as some patterns are discernible, but it does seem to be lacking the depth to explain the creative dynamics and underlying motivations.

After the talk, I began to relate his line of reasoning to Arthur Young's theory of process:

The Theory of Evolutionary Process as a Unifying Paradigm
Theory of Process Poster (too bad this isn't really visible online)

Which I first became exposed to through the work of the Meru Foundation:
letter matrix

It seems to me that the evolutionary forces Manovich is documenting conform to the trans-disciplinary evolutionary process that Young articulated. For what it's worth, the hybridization of media that Manovich claims we failed to predict was foretold in this book on the MIT Media Lab, published in 1988.

Monday, April 03, 2006

Another New Kind of Science?

Last weekend's Cultural Studies conference reminded me of a vicious cycle that many humanities-oriented researchers are being subjected to. Disciplines such as educational research, ethnography, anthropology, cultural studies, and sociology have effectively been colonized by the methodology of the social sciences, and they are being forced to play a numbers game they may not be suited for.

Many projects striving for credibility are subjected to the tyranny of statistics - forced to transform their qualitative information (interviews, transcripts, first person accounts) into quantitative information through the process of coding. This reduction forces the data into buckets and creates a significant degree of signal loss, all in the name of a few percentages and pie-charts.

Perhaps we have lost sight of the motivation for this reduction - the substantiation of a recognizable, narrative account of a phenomenon in support of an argument. Arguably, the purpose of the number crunching is to provide supporting evidence for a demonstrable narrative. Modern visualization techniques may be able to provide one without all the hassle.

True, this is not the only reason qualitative data is transformed into quantitative data, but advanced visualization techniques may provide a hybrid form that is more palatable to many of the researchers active in this area while remaining a credible methodology. Many people are being pushed into coding and quantification when they aren't thrilled to be doing so. The signal loss that coding introduces, all in the name of measuring, might be unnecessary if researchers used visualization tools that present the data comprehensibly, in all of its richness and complexity, instead of boiling it down to chi-squared confidence levels. (And does this false precision actually make any difference? Do results of 0.44 vs. 0.53 tell significantly different stories?)

In a thought provoking post on the future of science, Kelly enumerates many of the ways new computing paradigms and interactive forms of communications might transform science. The device that I am proposing here might lead to some of the outcomes Kelly proposes.

For a better idea of the kinds of visualization tools I am imagining, consider some of the visualization work on large email corpora coming out of the M.I.T. media lab, or the history flow tool for analyzing wiki collaborations, but even the humble tag cloud could be adapted for these purposes, as the power of words and visualizing the state of the union demonstrate.
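At its core, even the humble tag cloud is just a frequency count mapped onto font sizes. A toy sketch (the function and scaling constants are mine, not taken from any of the tools above) shows how little machinery it takes to present a corpus in full, without coding it into buckets first:

```python
from collections import Counter

def tag_cloud(texts, min_size=10, max_size=36):
    """Map word frequencies across a set of transcripts to font sizes,
    preserving the whole vocabulary rather than reducing it to categories."""
    counts = Counter(word.lower().strip('.,;:!?"')
                     for text in texts for word in text.split())
    lo, hi = min(counts.values()), max(counts.values())
    span = (hi - lo) or 1  # avoid division by zero when all counts match
    return {word: min_size + (max_size - min_size) * (n - lo) / span
            for word, n in counts.items()}

sizes = tag_cloud(["power and knowledge", "power and discourse"])
# "power" and "and" occur twice, so they get the largest font size
```

A real tool would drop stopwords and stem variants, but the principle stands: every term in the interviews remains visible, weighted rather than discarded.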

Crucially, tools analogous to Plone's haystack Product (built on top of the free libots auto-classification/summarizer library) might help do for social science research what auto-sequencing techniques have done for biology (when I was a kid, gene sequences needed to be painstakingly discovered "manually").

The law firms that need to process thousands of documents in discovery and the commercial vendors developing the next generation of email clients are already hip to this problem - when will the sciences catch up?

For any of this to happen the current academic structure needs to be challenged. The power of journals is already under attack, but professors who already have tenure can take the lead here and pave the road for their students to follow.

Saturday, April 01, 2006

Permanent Records

Today I presented last year's bioport Part II paper to the 2nd annual Cultural Studies conference at Teachers College.

Permanent Records: Personal, Cultural, and Social Implications of Pervasive Omniscient Surveillance

I think the distilled version of this model is far more digestible and accessible than the papers.

One of my co-panelists is doing some really interesting work with urban youth in the Bronx, and gathering incredible interview materials about the perceptions of surveillance by these youth, and their forms of resistance. These stories might help convey the violence of a surveillance society.

The conference format was a bit disappointing. I can barely believe academics still read their papers to each other at conferences - there are so many things that Open Source does right, including knowing how to throw a great conference. Even the variety of presentation formats is an idea that needs to spread - BOFs, lightning talks, presentations, and posters all create different spaces and dynamics for interactions between participants. The traditional model is so intimidating that it seems to discourage many people from participating.

More importantly, the social justice issues and governance models that are being explored by F/OSS projects are really important for the Cultural/Critical studies folks to be considering. It is also shocking how disconnected they are from the freeculture movement, and its theoretical roots. Arguably, the freeculture movement is a shadow struggle, mirroring the struggles for sustainability, and against globalization and the logic of capitalism being conducted in the physical world. But, it may also represent the actual ground on which that struggle is being conducted.

Sunday, March 12, 2006

Saints in the Church of Writely?


Two months back I saw Richard Stallman talk at a NYC Gnubies event, and I asked him a question that I have been thinking a lot about lately -- would a Saint in the Church of Emacs use Gmail?

To me the question revolves around the growing threat that 3rd party web services pose to the freedoms that free software is designed to protect. In O'Reilly's What is Web 2.0 he argues that software is transitioning from an artifact to a service, and that data is becoming the new "intel inside". In an age when applications have become commodities, could the freedom of my data (in an open format) be interchangeable with the freedom of software?

I recently listened to the Chief Open Source Officer at Sun Microsystems pose a similar question in his talk, The Zen of Free. He talks about the importance of Open Software implementing Open Standards, which is close to the idea I have been advocating, but doesn't quite go far enough.

Using free (as in beer) third-party web services is very tempting, but I am worrying more and more about the very things free software traditionally protects against - vendor lock-in, proprietary data formats, and the loss of the freedom to modify policy according to application-specific requirements.

I would be less antsy about using web 2.0 apps if I had some assurance that I could get my data back out without screenscraping a bunch of HTML pages. Even services with APIs like flickr and delicious create vulnerabilities, as I was loath to discover last week. Delicious provides a programmer's API, but it only exposes methods that operate on a single user. Thus, if you want to export a collection of links that have all been tagged with a particular tag (reasonable if you are engaged with a community in distributed research), you are back to screenscraping!
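To make the fallback concrete, here is roughly what that screenscraping looks like with nothing but the standard library. The `taggedlink` class name and the sample markup are illustrative inventions of mine, not delicious's actual HTML - which is exactly the problem: the scraper breaks the day the markup changes.

```python
from html.parser import HTMLParser

class LinkScraper(HTMLParser):
    """Collect hrefs from anchors carrying a given CSS class.
    (The class name is a placeholder, not any real site's markup.)"""
    def __init__(self, link_class="taggedlink"):
        super().__init__()
        self.link_class = link_class
        self.links = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "a" and self.link_class in a.get("class", "").split():
            self.links.append(a.get("href"))

# A stand-in for one fetched page of tag results:
page = ('<div><a class="taggedlink" href="http://example.org/a">A</a>'
        '<a href="http://example.org/skip">nav link</a></div>')
scraper = LinkScraper()
scraper.feed(page)
# scraper.links now holds only the tagged bookmark URLs
```

Contrast this fragility with an API or feed that exposes the same collection in a stable, documented format - that is the assurance I am asking for.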

These considerations, and others, argue for the need for free (as in speech) versions of many of these services. There are certainly side-effects inherent in running a centralized service, but many communities are making use of these "public" services because of their convenience, and the ease with which they can be "mashed up."

Which brings me back to the design that we have been thinking a lot about at work lately. Anders and I presented a talk at PyCon demonstrating some of these ideas. Anders did a great job writing our talk up here:

Tasty Lightning

It is important not to conflate our advocacy for building components that expose themselves as web services with building apps against third-party web services. The design we describe resembles a traditional mash-up, except that the components involved are locally controlled rather than relying upon external, corporate services. For all the usual F/OSS reasons it can be important to "own" and run your own services.

But this argument also has everything in the world to do with Ulises' essay In Defense of the Digital Divide as Paralogy. In it, Ulises grapples with Lyotard's critique of new media under the logic of capitalism, which has "established commodification and efficiency as the ultimate measures of the value of knowledge."

He continues:

...Lyotard states, in the final passage of The Postmodern Condition, that new media technologies can be more than simply tools of market capitalism, for they can be used to supply groups with the information needed to question and undermine dominant metaprescriptives (or what might be called ‘grand narratives’). The preferred choice of development, for him at least, is thus clear: ‘The line to follow for computerization to take . . . is, in principle, quite simple: give the public free access to the memory and data banks’ (Lyotard 1984: 67). (Gane, 2003, p.9)
Considering Google's stated ambitions to "house all user files, including: emails, web history, pictures, bookmarks, etc.", the freedom movement had better wake up to the fact that there is more to freedom than free software - we are being outflanked.

Free software is only one corner piece of this puzzle - to complete the jigsaw we need the corners of free data, in a free format. Anything else?

(yes, I know I am posting this question using blogger - a situation I hope to remedy after the semester finishes).

Thursday, March 02, 2006

Out of Context

Today I saw Ted Selker present a talk on "Context-Aware Computing: Understanding and Responding to Human Intention." His perspective on invention resonated strongly with my recent thinking on social interfaces and software as architecture, and in turn, ideology.

Ted is helping to create a world where intelligence is everywhere, transparently. People joke about toaster ovens with IP addresses, but you ain't seen nothing yet.

A few of the examples really stuck out though - intelligent doors that give different people different messages about the availability of the inhabitant, tools that help people manage their relationships better (e.g. themail, clustering and color coding emails, rather than putting them in buckets), and a great little anecdote about doctors who don't wash their hands before examinations.

In this last case, a hospital approached the lab asking for some high-tech solution to ensure that doctors washed before procedures. They used to have human supervisors (union, I'm sure) standing by the sink, and were envisioning some sort of RFID-cybercop-surveillance solution. Instead, Ted and his team designed an electronic doorstop: the examination room door would not close until the doctors had washed their hands for at least 20 seconds.

Ted has a background in cog-sci and is acutely aware (the whole Media Lab seems to be) of the ways in which technology is becoming a leading art, and the ways in which behavior can influence worldview. I wish this understanding were more widespread.

A few other thoughts -

Ted's characterization of inventing as an adventure movie, moving "at the speed of physics," reminded me a lot of extreme programming - release early, release often, embrace change, favor improvisation over the paralysis that comes with the heft of over-engineering and over-designing.

Many of his UI strategies seemed to draw heavily from techniques I first learned about reading The Art of Memory (also echoed in research suggesting larger screens improve efficiency).

Also notable is how this approach of transparent, cognitive prosthesis contrasts with the UI the informedia group presented. Their Visual Query Interface presents the user with sliders allowing them to interact with the system to fine tune the strictness of the computer's judgment. This mixed mode of interaction seems to differ fundamentally from the approach the contextual computing team is taking.

Friday, February 03, 2006

all work, all play

Last Friday CCNMTL hosted a mini-conference on New Media and Education (pics). My colleague Dan Beeby and I co-presented a marathon series of workshops on Sakai and Web Services. We repeated each of our two 35-minute talks 3 times over the day (2x3 talks == a very long day), and I can't wait for the videos to be published so I can see the rest of the conference ;-)

The first talk unfolded into a conversation about Course/Content Mgmt systems, open/community source ecologies, and the purposeful use of tools w/in those environments. The second talk covered rss, blogging, delicious, flickr, odeo, and the balance between push and pull. The participants were attentive and engaged, and although the pace was brutal, I really enjoyed working on these presentations.

The funny thing about giving 6 talks in one day, is that by the third talk in, I couldn't remember if I had used a particular phrase two slides back, or two hours back... Luckily, Dan and I knew the material cold, had a good rapport, and were very comfortable swapping lines and improvising. The only glitch was due to flickr not refreshing their feed for over 24 hours... can't expect much more from an external service (more on that in a future post).

The slides got a little mangled on the HTML export, but here they are: An Instructor's Guide to Sakai & Courseworks Remodeled.

Dan has a great touch in Photoshop, so be careful what sorts of pictures you leave lying around his desk.

Monday, January 16, 2006

A red guitar, 3 chords, and the truth

This weekend I participated in the NYC free culture summit and learned a few refreshing radical activism tricks from the class of '06.

In stark contrast to the scholarly focus group I attended last week, this group explicitly understands that they need to create social spaces for like-minded activists to congregate, learn, and plot. The tools of the revolution were revealed in the speed geeking session - once someone in the 21st century finds the truth, all they need is a mailing list, a blog, a wiki, IRC, and RSS (with a dash of delicious and flickr, to taste). It is remarkable how quickly and easily people with real communication needs figure out how to use this suite of tools and understand which is good for what, and when.

Highlights included a Riot Folk performance, a talk by Siva ("Space. Hope. Imagination. Potential."), a talk by the Creative Commons gang, and a surprise appearance by Cory Doctorow.

The most fun had to be not-protesting (you need a license to protest) outside of Times Square's Virgin Megastore, and reverse-shoplifting DRM info into the stacks of damaged CDs.

The revolution might not be televised, but it could very well end up on flickr.

Friday, January 13, 2006

His Master's Voice

I recently read that Guglielmo Marconi envisioned the radio being used primarily for 2-way communications, and Alexander Graham Bell imagined the telephone being used to broadcast concerts to large audiences. Whether or not this is true, it's interesting to wonder if the inventors of a technology are really the best at predicting its eventual usage.

Today I attended a focus group, organized by the Marconi Society and EPIC, which focused on the next generation of scholarly tools and the future of research and the journal. Most people in the room were completely overwhelmed by the amount of information they were supposed to track, and many thought that better filtering tools would help. People also talked about the real problem of knowledge quality and credibility, and the need for some sort of map for navigating the various layers of information in the world.

What I kept hearing in people's remarks was that people really need spaces, not maps. Researchers need virtual watering holes to gather around. The quest for knowledge is not a search for data; knowledge is arrived at through dialectic. Communities of like-minded researchers will naturally perform the task of filtering, highlighting, and vetting important information. It will take AI a long, long time to accomplish a comparable task with advanced search and filtering portals...

Seems to me like the Marconi Society should consider funding the development of a specialized distribution of a well-established CMS, perhaps modeled on Drupal's CivicSpace, or Shuttleworth's SchoolTool. CivicSpace is basically Drupal bundled and configured with some modules geared towards operating an NGO; SchoolTool is a Zope 3 app designed for operating a small-to-mid-size K-12 school. The work might also benefit from considering the social software design patterns we worked on in Ulises' course this past fall.

I also met some really cool people, doing really interesting and socially important work with technology.

Sunday, December 18, 2005

Closing Thoughts on MSTU 5510

Ulises recently asked us to summarize our thoughts for the semester in our blogs. Considering that this blog was started for this class, I was surprised by my own initial resentment at being asked to post something so specific here. During the course of the semester, this forum has become a place for me to speak, not to answer. Even when I was posting assignments for class, they were items and issues which I selected myself. This initial emotional reaction indicates how engaging these tools can become, and helped me answer some of the questions on Ulises' list.

It's been great fun! Best of luck to everyone, and see you on Tuesday.

>> What is 'social' about social software?
to paraphrase: Social Software is made of people
>> How is the notion of community being redefined by social software?
>> What aspects of our humanity stand to gain or suffer as a result of our use of and reliance on social software?

Radical redefinitions of memory, identity, personal space, intimacy, and physical interaction.

>> How is social agency shared between humans and (computer) code in social software?

>> What are the social repercussions of unequal access to social software? >> What are the pedagogical implications of social software for education?

stay tuned.
>> Can social software be an effective tool for individual and social change?

See above. I think the pedagogical value of a tool follows from its potential for individual and social change.

Friday, December 16, 2005

Happy Holidays!




The semester is almost over, and that means it's time for me to compose some thoughts. As usual, this opens more questions than it answers, but I'm pretty happy with how it turned out.

Collecting Knowledge: Narrative Tapestries and Database Substrates

"An examination of Web 2.0 using Manovich’s Language of New Media, and an interpretation of folksonomies within the context of the narrative-database dichotomy. This inquiry looks at tagging as a mechanism for constructing narratives from databases, and relates narratives to knowledge construction and representation. Educational curricular activities involving tagging will also be considered."

Special thanks to Prof. John Broughton, John Frankfurt, Michael Preston, and Alexander Sherman for helping me develop these ideas.

Tuesday, December 06, 2005

Pimp my dilapidated, third-world, ambulance

On Tuesday, November 29th I attended a presentation of The Diary of Angelina Jolie and Dr. Jeffrey Sachs in Africa (watch it here). Angelina couldn't make it, but Sachs (author of The End of Poverty) is a rock star in his own right, and it was the first time I had ever seen him talk.

He is an energetic and inspirational leader, who still believes we have the power to make the world a better place, and is actively working on operationalizing this vision. Some may be skeptical about MTV's pro-social initiative, think.mtv.com, but whatever their corporate parent's intentions, it has the potential to do some real good.

Notable moments included Dr. Sachs using the phrases "Open-Source politics" and the "wikipedia of foreign policy" to refer to an emerging form of democratic self-determination. It was also great when an audience member questioned an MTV VP about sending a pimp team over to Kenya to help fix the village's only ambulance.

Monday, November 21, 2005

"Michael, are you sure you want to do that?"

Pull over Kitt - you've just been lapped.

On Monday November 14th I attended a presentation by Sebastian Thrun, an AI researcher at Stanford U. whose team recently won the Darpa Grand Challenge.

The idea behind the Grand Challenge is to accomplish something that seems impossible, along the lines of crossing the Atlantic, the X Prize, etc. Darpa had previously funded cars that drive themselves, but after numerous failures decided to turn the task into a contest and see how far teams would get in a competitive setting. Last year none of the entrants managed to finish the course, but this year 5 finished, 4 within the allotted time.

The difference between last year and this year was primarily improvements in software, not hardware. In fact, once the software has been developed, outfitting a car with the necessary equipment to drive itself - the perceptual apparatus (laser, radar, and video guidance), the GPS, the inertial motion systems, the general-purpose computing servers, and the fly-by-wire control systems - was estimated by Sebastian to cost less than $5k once mass-produced.

Sebastian spent a long time conveying how hard it is to teach a computer to answer the question "What is a road?" The entire time the audience was left wondering - how the @(*^*#@ do we do that?

One of the issues that tripped up the lasers was the systematic pitching forward and backward of the car as it bumps over the terrain - this causes the perceptual systems to jerk back and forth, perceiving parts of the ground over again and mistaking these discrepancies for obstacles. No inertial guidance system is precise enough to correct for these errors, and the team won by discovering a systematic regularity in the errors themselves (and building a probabilistic model to capture it). Instead of finding theoretically precise values for the constants in their equations, the Stanford team tuned these parameters by actually driving the vehicle and "tagging" safe terrain. During the training, they also took calculated risks, let bad things happen, and looked back at the earlier optical flow.
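My own toy reconstruction of that idea (the function and constants are made up for illustration, not the Stanford team's actual model): tolerate larger height discrepancies between two laser readings of the same patch of ground the more time has elapsed between them, since pitch error accumulates over the interval.

```python
def is_obstacle(h1, h2, dt, base_tol=0.15, pitch_drift=0.5):
    """Two laser hits on the same patch of ground disagree more the
    longer the interval between them, because the vehicle pitches in
    between. Flag an obstacle only when the height difference (meters)
    exceeds a tolerance that grows with elapsed time dt (seconds).
    All constants here are invented for illustration."""
    tolerance = base_tol + pitch_drift * dt
    return abs(h1 - h2) > tolerance

# The same 30 cm discrepancy: an obstacle when the readings are nearly
# simultaneous, but explained away as accumulated pitch error when the
# scans are half a second apart.
near = is_obstacle(0.0, 0.30, dt=0.01)  # True
far = is_obstacle(0.0, 0.30, dt=0.5)    # False
```

Tuning `base_tol` and `pitch_drift` against terrain a human has tagged as safe is, as I understood it, the spirit of how the team calibrated their real (far richer) model.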

Sebastian is aware of the military applications driving (sic) the development of this technology, but is personally motivated by the lives he thinks can be saved in civilian applications. In fact, he boasted that he was now planning on having a vehicle drive itself from San Francisco to Los Angeles by 2007! I am beginning to wonder how much longer it will be legal for humans to operate motor vehicles.

When asked if any of the team's research findings would be applicable elsewhere in CS, Sebastian replied that he had no idea, yet. His philosophy is to first build the system, and then spend years afterwards poring over the data to figure out what happened. In case you hadn't realized it yet, the robots are already here (some of them killer)!

Tuesday, November 15, 2005

New York's Darker History

This weekend I attended the masterfully produced Slavery in New York exhibit at the New York Historical Society. The exhibit was deeply moving, and vividly and viscerally captured a portrait of African American history I was not fully aware of previously. I left the exhibit with a new understanding of how the 400-year-long institution of slavery was a tragedy fully on par with the Nazi Holocaust.

I will save a discussion of the show's content for another time, but for now I want to focus on the amazing use of educational technology woven throughout the exhibit. From start to finish, the show effectively incorporated video, interactive kiosks, and innovative displays which pushed the boundaries of some of the best work I have seen in this field.

The use of screens is a topic that is on my mind from my studies of Lev Manovich this semester, and this exhibit incorporated many cutting edge treatments of the screen.

To start with, at the beginning of the exhibit the visitor is confronted with video commentary of the reactions of past visitors, and at the end a self-service video booth allows visitors to record their own commentary. I have never seen a self-service video booth like this incorporated into a museum exhibition, and it was very powerful and impressive.

Beyond that, the exhibit's ability to transport the visitor to the reality of the past was greatly enhanced by its translation of historical abstractions into modern-day interfaces. In particular, I am thinking of the classified ads advertising slaves for sale and offering rewards for runaways, the presentation of the slave ship logs, and most strikingly, the presentation of the slave economy in a Bloomberg-style terminal. The cold economics of slavery were driven home by the scrolling marquee listing the numbers of Negros arriving on incoming ships, and the fluctuating going rates for various skills.

The incorporation of video throughout the exhibit, from overhearing the conversation of slaves gathered around a well (in a brilliant interface), to the dialogue between the portraits of ornately framed talking heads, to the interactive choose-your-own-adventure kiosks was incredibly well done, and offered accessibility and deep learning even to the fragmented attentions of the postmodern era.

I highly recommend visiting this exhibition, as the web site barely begins to do it justice.

Wednesday, November 09, 2005

Wikibases and the Collaboration Index

On October 27th I attended a University Seminar presented by Mark Phillipson. The seminar was lively and well attended, and Mark managed to connect the culture of wikis with their open source roots.

Sometime soon I plan on elaborating on ways in which software, as a form of creative expression, inevitably expresses the values of the creators in the form of features. But right now I want to focus on the taxonomy of educational wiki implementations that Mark has identified since he began working with them.

Here is how Mark divides up the space of educational wikis:
  • Repository/reference - eg Wikipedia
    • A website whose primary function is to create a repository of knowledge on a particular topic.
  • Gateway - eg SaratogaCensus
    • A website whose primary function is to collect, assemble, and present references to external sources
  • Simulation/role playing - eg Holocaust Wiki
    • A "choose-your-own-adventure" style simulation/game environment
  • 'Illuminated'/mark-up - eg The Romantic Audience Projects
    • An environment that provides tools for detailed exegesis of primary sources, where students are instructed to leave the source material unchanged and create subpages with detailed commentary.
I think this taxonomy is accurate, but doesn't completely capture one of the most interesting educational implications of wikis - the process of creating them.

In particular, I can think of a number of variations on the repository/reference wiki, where the final products might all look similar, but where the "collaboration index" might differ substantially (for more on the popularity of the repository/reference form, see Database as a Symbolic Form, Manovich 2001).

Wikis are a very flexible tool whose usage can vary from a personal publishing tool, to a simple Content Management System, to a collaborative authoring environment. Additionally, while wiki software doesn't usually support the enforcement of a strict workflow, policy can be stipulated and adhered to by convention (as in Mark's class, where the original poems were meant to be left intact).

Consider a few different applications of reference wikis in the classroom:
  • One-way Publishing
    • A simple means for instructors to publish and organize information for their class.
    • Examples include:
      • Instructional handbooks, assignment "databases", completed examples
  • Collaborative Mini-sites and/or subsections
    • Exercises where individuals or groups work on subsections of a wiki which are combined and referenced within a single larger site
    • Examples include:
      • Students dividing large assignments amongst themselves, each sharing their own results with the group.
      • A site like the social justice wiki where groups of 3-4 students each worked on a reference element of the site.
  • Collaborative Websites
    • Sections of the site where everyone in the community is supposed to be contributing content
    • Examples include:
      • Common Resources, Glossary of Terms, and the larger information architecture and organization of the entire site.
  • Portals and Meta-tasks
    • Also, consider that due to their flexibility, many wikis end up being repurposed beyond their original conception, and begin to serve as portals, where many meta-issues and conversations can take place beyond the assembly of the content itself. Some of these tasks include mundane administrative work, like students forming groups, coordinating assignments, taking minutes, and scheduling time.

While the end results of many of these collaborations might certainly all look similar to each other, perhaps the differences in the process by which this content is developed are crucial in capturing part of what is happening with wikis in the classroom.

This analysis probably also has implications relating to the archiving and the use of a wiki environment in a classroom over time. If the act of creating the wiki is central to what the students are supposed to learn from the exercise, then should they start with a fresh wiki every semester? How is the experience different when they are contributing to an existing system (or even have access to prior versions of the project)?

For more on this, see Mark's comments on CCNMTL's musings blog.

Sunday, October 23, 2005

Fraternal Nearness

In his post Social agency and the intersection of communities and networks, Ulises Mejias expounds on the differences between communities and networks, and relates these concepts to the possibility of ontological nearness. The placement of communities within this continuum can be understood more clearly through the immediacy, intensity, and intimacy of the interactions.

This conceptual apparatus helps me begin to explain a phenomenon that I have been thinking about for a while now. Part of the question can be thought of as: What motivates the open source developer? Why would someone who works full time, often writing code professionally, choose to volunteer their nights and weekends to the continued production of more code?

I think this question is an important one for the educational community, since if we could identify this source of motivation, we might be able to "bottle it" and recreate it within the classroom.

My experiences with the Plone community have given me some insight into this question, and I think that the phenomenon of Open Source projects would benefit from an analysis using the ideas proposed in Mejias' draft.

While many people imagine that open source communities are purely virtual (the non-possibility of a virtual community notwithstanding), it is important to recognize the ways in which these networks of individual developers become communities. Open Source projects typically use a variety of Social Software tools to communicate: email and mailing lists, web sites, forums, discussion boards, blogs, and IRC, to name a few. They also often hold face-to-face conferences, and some projects even regularly arrange sprints (also).

Anecdotally, I found it fascinating to observe a progression in intimacy, to the point where some people's day jobs are just what they do between conferences and sprints. It is no secret that sprints and conferences help make these communities function, cementing interactions begun over mailing lists and IRC.

But an interesting comparison that I would like to propose, which I think can also be described according to the dimensions proposed by Schutz, is the similarity between an Open Source community and a college Fraternity.

[Disclaimer: I was never in a college fraternity, so this analysis is partially speculative]

Fraternities (and I suppose professional guilds and/or unions which they might be related to) are an example of an extended network/community which is disappearing from the modern urban reality. Some people find these kinds of connections in religious congregations, but otherwise many of us have lost the extended networks of people we know, but not intimately or closely.

Like fraternities, Open Source projects typically have a steep gender imbalance, and members often go by aliases or nicknames and develop internal languages, acronyms, and lore. The "project" or "organization" becomes an independent object of importance that members grow loyal to, devoting their time and resources to supporting it.

Eric Raymond has written a bit on the motivations and structure of the hacker community. I have also heard alternate accounts of developer motivation, beyond status and recognition, that have to do with escape from "reality" and immersion in an environment that the developer completely controls. There are many potent sociological, ethnographic, and anthropological research questions that this touches on, many under active research (e.g. Effective work practices for Free and Open Source Software development, or wikipedia's research pages).

In summary, I think that Mejias' framework is very useful, but would benefit greatly from more examples which exercise the ideas. Perhaps we can work these categories into our ssa wiki.

Friday, October 14, 2005

Slippery handles

Today I learned that a friend of mine changes her IM handle every time she switches jobs. That's nothing: she changes email addresses every time a relationship ends.

I don't know why or when she started doing this, but the more I think about it, the more sense it makes.

"Because it's your music, and you paid for it"

This afternoon I attended a talk given by Bill Gates at Columbia University. The talk was part of his university tour, probably prompted by the well-documented brain drain happening at MS right now (certain well-known competitors seem to be following the strategy outlined in Good to Great: get the smartest people you can find "on the bus", and then let them drive...).

Here are my raw notes.

I must say that this afternoon's talk was a bizarre experience. Perhaps it's all the theory stuff I have been reading lately, but I was in a very psychoanalytic, read-between-the-lines kind of mood, trying to pay as much attention to what he didn't say as to what he did.

First, he has clearly taken some lessons from Steve Jobs. He presented casually and demoed live software. One big difference - while Jobs enjoys demoing creative authoring tools, Gates spends most of his time demoing tools of consumption. He continues to treat his gadgets as receivers, not transmitters, and this is all getting a bit tiring.

Next, close to all of the software contexts he described were business and work related. There was very little talk about socializing or play (save for the Xbox, and socializing in that virtual space). It was eerie that when someone asked him what his greatest accomplishments were, he responded with how much he loves work (and working at his foundation). All of his examples for the uses of ubiquitous computing were work or consumer related: auto-tracking receipts for expense reports, shopping, collecting business cards when traveling, and location info while in traffic (presumably while commuting). This is all summed up by his grand vision of the future smartphone as a replacement for the wallet.

Isn't there something else the phone could replace? Could our phones become surrogate brains, man's best friend, or personal assistants? Can't we conjure up a better metaphor than wallets for how software will change the world? Will it do anything beyond making us better and more efficient shoppers?

The talk kept getting weirder - Gates played a video, which most of the audience thought was very funny. I will have to save my analysis for my Media and Cultural Theory class (or the comments), but it really threw me off.

Gates never mentioned Google, Firefox, or Linux. He did acknowledge Wikipedia (by name), FreeBSD, sendmail, and the NCSA browser. He even made two truly surprising statements regarding IP: after demoing that the new Xbox 360 will connect to an iPod, an audience member asked if it would be able to play FairPlay-protected AAC files. Gates responded that it won't be able to, because Apple won't let him (Ha!), to which he added, "it's your music and you paid for it." He also stated that "studios have gone overboard in protection schemes," and that we "will always have free and commercial software."

Before the session, they passed around cards with potential questions (I am still not sure if the questioners were plants, reading these cards...).

Here were my never-asked questions:
1) Technology can bend towards good or evil. What can we do to ensure that it is used for the Good? What is M$ doing to promote the use of its software for the Good?

2) In the upcoming world of omniscient surveillance, what role will M$ play in ensuring individuality, privacy, and anonymity? What is M$ doing to contribute?

Friday, October 07, 2005

Serenity Lost

Nothing like a little pulp sci-fi to resonate with a class on emerging tech. I saw Serenity tonight (skip this post until you have seen it, unless you aren't planning to at all) and was amused at how a central plot line revolved around some information that has been covered up by the authorities, and the struggle to disseminate that message.

The simplicity of a single message whose content can change the world, broadcast from a single distribution channel, is amusing but poignant. I mean, if you could broadcast one message to the world, what would it be? Are these folksonomies helping in filtering and distributing this information, or are we just ending up on the same disconnected islands of information we started from?

I am thinking of the disjoint sets of books that liberals and conservatives read, but there must be many other examples; perhaps the entire blogosphere falls into this category. One thing I have realized as I rely more and more on my RSS client is that once I am lost inside of it, if you aren't syndicating a feed, you don't exist.
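For readers who haven't looked under the hood, "syndicating a feed" just means publishing a small XML file that aggregators poll. The sketch below (titles and URLs are illustrative, not real feed data) shows how little of a site an RSS client actually sees — only the items in the feed:

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 feed: the bare structure an aggregator looks for.
# Item titles/links here are made up for illustration.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Alchemical Musings</title>
    <link>http://alchemicalmusings.org</link>
    <description>Study blog</description>
    <item>
      <title>Serenity Lost</title>
      <link>http://alchemicalmusings.org/serenity-lost</link>
    </item>
  </channel>
</rss>"""

# An RSS client reduces a whole site to its syndicated <item> elements;
# a site that publishes no feed produces an empty list -- it doesn't exist.
root = ET.fromstring(FEED)
items = [item.findtext("title") for item in root.iter("item")]
print(items)  # → ['Serenity Lost']
```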

I am quite aware that a full-blown information war is currently underway. The existence (and adoption) of Flickr allows me to laugh at the Bush administration's attempts to prevent the publication of Katrina's casualties, but how did this story get swallowed up?

If BitTorrent didn't exist (or were outlawed) and we could not reclaim the "lost" bandwidth of individual broadband subscribers, large file transfers and exchanges would probably have to be mediated through centralized bandwidth providers like Akamai or Cisco. But this is not quite as simple as centralized vs. decentralized publishing models, since that is only half the equation. The information retrieval needs to happen on the other end, or else you're screaming into an abyss.

I was once lucky enough to find myself in a conversation with the author of CiteULike. I casually inquired as to whether he was planning on releasing the engine which powers his site under an open license. He replied that he would, but that it would be a bad idea: CiteULike is supposed to be a service, not a product, and its value is actually diluted the more instances of it there are running. Part of Flickr's or del.icio.us's power lies in their popularity. They are much more effective the more users they have, leaving us once again in a paradoxical quandary, where we need a decentralized, centralized service.

Too many Flickrs and they are all rendered weaker; too few and we are back in a situation where our information is in danger of being homogenized, controlled, and filtered.

Sunday, September 25, 2005

Is anyone watching grandma?

On Friday I had a chance to meet with a group of Artificial Intelligence researchers at Carnegie Mellon University. They demonstrated a working technology, Informedia, which I would have guessed was at least 3-5 years off.

What was most incredible about this demonstration was the vivid observation of the trenches in which the information war is being waged. Like any power, technology can bend towards good or evil, and as this post points out, Social Software can be understood as the purposeful use of technology for the public good.

The surveillance possibilities that machine-based processing of video and film affords are mind-boggling and horrifying (for more on this angle, see my bioport papers). At the same time, the kinds of research, machine-based assistance, and even the ways in which this kind of technology would change journalism, could all be harnessed for the public good.

Are transparency, openness, and free culture our best bet for steering and harnessing these powers productively?

Thursday, September 22, 2005

Adventures in Wien

I apologize for this study blog's late start - I just returned from the Plone conference in Vienna, and internet availability was spottier than it should have been.

At the conference I presented a talk which relates closely to the topic of this seminar, entitled Platonic Wikis and Subversive Social Interfaces. People seemed very interested in the subject, and a common response was that these ideas were obvious when stated, but people were very happy to hear them concisely articulated and formulated.

I will be posting my slides up on the conference site, but in the meantime, here is a working link to them: html ppt Photos and links from the conference should start appearing under plonecon2005 over the next few days.

I will be catching up with ss05, blog postings, and sleep this weekend.

Techno-Bio:

I have an extensive background in software architecture, design, and development. Prior to joining the center, I was the lead developer at Abstract Edge, an interactive marketing firm which serviced both non-profit and corporate clients. I was also a senior developer at MaMaMedia, a children's educational Web site. I am an active open source contributor whose technical interests include Linux, Python, and Content Management.