douggage@san.rr.com, Jim.Gemmell@microsoft.com, jain@ics.uci.edu, thad@cc.gatech.edu


Mark Podlaseck, IBM T.J. Watson Research Center, 19 Skyline Drive, Hawthorne, NY 10532 podlasec@us.ibm.com



FROM our current perspective, it seems natural to regard the electronic chronicle as some not-too-distant phase in the evolution of blogs. Rather than requiring, as a blog does now, long hours laboring at a keyboard to find the right words to describe an experience, electronic chronicles (or eChronicles) promise that our mere presence at an event might soak up and fully document the experience in all its richness. The episode can subsequently be examined and reexamined, not as text punctuated by amateur graphics, but as increasingly high-definition media. These media contribute to a simulacrum not only of that experience, but of the totality of one's own experiences, as well as those of one's family, friends, colleagues, and possibly even total strangers.

This evolution of the blogosphere into an experience simulacrum, while driven by clear-cut factors such as ease of use, convenience, completeness, accuracy, and simple usefulness, may nonetheless be poised to shift our relationship with the world. Almost thirty years ago, the critic Susan Sontag convincingly argued that photographs have a fundamentally different relationship with the reality they purport to represent than text does. Her reasoning went something like this: Print is linear, and frequently shapes a narrative with a beginning, a middle, and an end. Photography is antilinear; as a slice of something larger, it is skeptical of conclusions, endings, and even stories. Print incorporates a point of view, and involves analysis and interpretation. Photographed images are not so much statements about the world as pieces of it; they are "miniatures of reality that anyone can make or acquire.... To photograph is to appropriate the thing photographed. It means putting oneself into a certain relation to the world that feels like knowledge -- and, therefore, like power."

Just as text-based blogs embody many of the qualities Sontag ascribes to print, electronic chronicles appear to inherit, and then magnify, many of the qualities she sees in photographs. This panel will focus on the anticipated evolution from our current text-heavy blogs to the rich media made available by electronic chronicling technologies, and the personal and social changes this might bring about. As we capture more and more experiences, will the ways we regard those experiences change? What will happen to the way we recount our experiences to others? Will our social behavior change? What about our personal behavior? Will some public behaviors become more private as they are captured, and some private behaviors become more public? What will privacy mean in the context of these technologies? How will our relation to our exploding archives of digital captures change? How dependent will we become on them for simple tasks like locating the car? Will an economy of captured experiences emerge? If so, how might it function?



Doug Gage is an independent consultant based in Arlington, VA. From 2000 to 2004 he was a Program Manager in the Information Processing Technology Office (IPTO) at DARPA, where he managed the MARS and SDR programs in robotic software and co-managed the Bio-Info-Micro program. At DARPA, he also formulated the LifeLog episodic-memory program, which attracted much attention but ultimately did not receive funding. Prior to DARPA, he worked in robotics for many years at SPAWAR Systems Center San Diego. He holds a Ph.D. in Physics from Arizona State University.



Jim Gemmell is a researcher in Microsoft's Next Media research group. His current research focus is personal lifetime storage: he is the architect of the MyLifeBits project and chaired the First and Second ACM Workshops on Capture, Archival and Retrieval of Personal Experience (CARPE). Dr. Gemmell received his Ph.D. from Simon Fraser University and his M.Math from the University of Waterloo. His research interests include personal media management, telepresence, and reliable multicast. He produced the online version of the ACM 97 conference and is a co-author of the PGM reliable multicast RFC. Dr. Gemmell serves on the editorial advisory board of Computer Communications.



Ramesh Jain is an educator, researcher, and entrepreneur. Currently he is the Donald Bren Professor in Information & Computer Sciences at the University of California, Irvine. Before this he was a Farmer Distinguished Chair at the Georgia Institute of Technology. Ramesh is a pioneer in multimedia information systems, image databases, machine vision, and intelligent systems. While a professor of computer science and engineering at the University of Michigan, Ann Arbor and the University of California, San Diego, he founded and directed artificial intelligence and visual information systems labs. Ramesh was also the founding Editor-in-Chief of IEEE MultiMedia magazine and serves on the editorial boards of several journals in multimedia, information retrieval, and image and vision processing. He has co-authored more than 300 research papers in well-respected journals and conference proceedings, and has co-authored and co-edited several books. He is a Fellow of the ACM, IEEE, AAAI, IAPR, and SPIE, and is the Chairman of ACM SIGMM (Special Interest Group on Multimedia). Ramesh recently co-founded SEraja to address the needs of the emerging EventWeb.



Thad Starner is an Assistant Professor in Georgia Tech's College of Computing, where he founded and directs the Contextual Computing Group. Thad holds four degrees from MIT, including a Ph.D. from the MIT Media Laboratory in 1999. He was an Associate Scientist with BBN's Speech Systems Group in 1993, when he created one of the earliest high-accuracy online cursive handwriting recognition systems. Starner is one of the pioneers of wearable computing and has authored over 100 scientific publications and book chapters in mobile computing, human-computer interaction (HCI), computer vision, augmented environments, and pattern recognition. He co-founded the IEEE International Symposium on Wearable Computers (ISWC) and is a founding member of the IEEE Technical Committee on the subject. His work includes a gloveless, real-time sign language recognizer; various intelligent agents in support of everyday memory; theoretical frameworks for power generation and heat dissipation in wearables; several augmented reality systems; and a computer-vision-based interactive graphics workbench for which he received a Best Paper award at VR2000. Thad's current research focuses on computational agents for everyday-use wearable computers.