Miskatonic University Press

AWE 2014 Day Two

ar conferences

Thursday 29 May 2014 was the second and last day of the Augmented World Expo 2014. (Previously: my notes on the preconference day and first day).

Since no one was interested in libraries at the Birds of a Feather session the day before, this time I decided to try FOSS. I grabbed a large card and a marker at the table outside the room. “What’s your table about?” said a fellow sitting there. “Free and open source software,” I said. “I really like SketchUp,” he said. The silence went on uncomfortably long as I tried to figure out what to say, but then we agreed it was an impressive tool, and that designing anything more than a cube could be pretty hard.

Tables about oil and gas, health, manufacturing and venture capital filled up, but no one came to talk about free software, though an English literature professor took pity on me and we had a great talk about some projects he’s doing.

The opening keynote was by Hiroshi Ishii, the head of the Tangible Media Group at MIT’s Media Lab. It doesn’t seem to be on YouTube, but he gave the same talk at Solid eight days before. He’s working on Radical Atoms: “a computationally transformable and reconfigurable material that is bidirectionally coupled with an underlying digital model (bits) so that dynamic changes of physical form can be reflected in digital states in real time, and vice versa… We no longer think of designing the interface, but rather of the interface itself as material.” That doesn’t come anywhere near to describing how remarkable their work is.

The talk was an absolute delight.

The inFORM especially amazed me. Watch the video there.

Next was something completely different that turned out to be thoroughly bizarre: Tim Chang, who works at the Mayfield Fund, a venture capital firm, where he is “a proven venture investor and experienced global executive, known for his thought leadership in Gamification, Quantified Self, and Behavioral/Social Science-led Design,” with a talk called “Building the Tech-Enabled Soul.” Here are some quotes.

“Do you remember the hours you spent crafting your characters [in role-playing games]? You’d take all the traits you wanted to min/max, whether it was strength vs dexterity or intelligence vs wisdom, you’d unlock skill trees and you’d think about how you want to architect your character. I remember when I was in my twenties, it kind of hit me like a blinding flash of the obvious: how come we don’t architect our own lives the way we spend all that time crafting perfect little avatars? That kind of opened me up to this idea of gamification, which is that life itself is a grand game.”

I was shaking my head at this, but then he talked about how thinking of everything in life as a game led him to the quantified self:

“I’ve been body-hacking for a couple of years.”

It was about here that I stopped taking notes and just stared in amazement.

“Prosthetics for the brain. Sounds kind of crazy but we’re already there.”

“How many of you saw the movie Her? I thought that was pretty fantastic.”

At 13:23 it took a wonderful turn to the weird: he got into Borges and The Library of Babel. I was not expecting this. Or any of what followed.

“They were able to extract the smell and the colour of cherries and bananas, which are pretty well known DNA chains, and they were able to mix them into E. coli and other types of organisms, such that you could feed it to your pet and your pet would produce really pleasant-smelling and -coloured excrement.”

“There is a polo horse breeder in Argentina … [who will] use IVF and go create lots of attempts at mutant super polo horses.”

“I want to create the Library of Babel in every possible chain of As, Cs, Ts and Gs.”

“Perhaps that’s what the universe is, a giant kind of Monte Carlo experiment running every possible permutation of all these sorts of things in all configurations, and that there’s many universes running this experiment over and over again.”

“The next step is that as the actuators begin, as we are able to hack our wetware, as we understand our neuroelectrical interfaces of our bodies, I think our fate is to really transcend our physical form and really go beyond that, and figure out what it is that we want to become if all states are possible. Again, this is sounding really really out there.”

The bit on Korean plastic surgery at 17:47 you’ll have to see for yourself.

“I think our memories are going to be up for grabs as well.”

“What I think the holy grail is, is can these technologies be used as an engine for perfect empathy. I don’t want you to tell me what it’s like to be you, I want to know exactly what it feels like to be you.”

“As we become that networked consciousness that everybody talks about, we will become this sort of more global interconnected species. My own theories are that at that point we’ll be welcomed to the club and there will be lots of other species who have way transcended and say, ‘Welcome to the club!’ And then we’ll all link and do that at a galactic inter-species scale and on and on and on it goes.”

“Are we a particle, are we a wave? Property rights vs open source?”

Twenty-five minutes of jaw-dropping astonishment. I read a fair bit of SF. We live in SF now. I just wasn’t expecting to hear this from someone at a company with hundreds of millions of dollars on hand to try to hurry us along to the Singularity.

Next I went to the panel on designing hardware for the interactive world.

Jan Rabaey of the SWARM Lab at UC Berkeley (requires cookies) talked about ubiquitous computing, when devices are all around us and all connected and form a platform. (Continuing on from Chang, this reminded me of Vinge’s localizers and smart dust in A Deepness in the Sky.) Wearables are here, in an early form. But there’s a problem: stovepiping. Every device a company makes will only talk to other devices from that company. It’s short-term, incompatible and unscalable. (And one thing that made the internet spread so widely is that it works the opposite way.) His solution is the “human internet:” devices on, around and inside you.

This interview with Rabaey goes into all this more, I think.

Interviewer: How exactly do you deliver neural dust into the cortex of the brain?

Rabaey: Ah, good question.

He recommended Pandora’s Star by Peter Hamilton.

Next was Amber Case (the director of ESRI’s R&D Center) on Calm Technology. It’s quite entertaining and pleasantly sarcastic, like the bit about “the dystopian kitchen of the future.” Worth watching. She has made The Coming Age of Calm Technology by Mark Weiser and John Seely Brown available on the calm technology web site she’s just launched.

How can AR be used in a calm unintrusive way? Ads and notifications popping up everywhere is exactly what I don’t want. In fact an AR version of Adblock Plus, to hide ads in the real world, would be very nice.
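Purely as a thought experiment (none of this exists), an AR ad blocker might work the way browser ad blockers do: a filter list matched against whatever the scene recognizer labels, with matching objects suppressed before rendering. A minimal sketch, with labels and the blocklist format invented for illustration:

```python
# Toy "AR Adblock": filter recognized objects against a blocklist
# before they reach the renderer. The labels and blocklist here are
# invented for illustration; no such system exists.

BLOCKLIST = {"billboard", "poster", "popup_notification"}

def visible_objects(recognized, blocklist=BLOCKLIST):
    """Return only the objects whose label is not on the blocklist."""
    return [obj for obj in recognized if obj["label"] not in blocklist]

scene = [
    {"label": "billboard", "text": "BUY NOW"},
    {"label": "tree"},
    {"label": "popup_notification", "text": "3 new offers!"},
    {"label": "bench"},
]
print([obj["label"] for obj in visible_objects(scene)])  # ['tree', 'bench']
```

The hard part, of course, would be the recognizer, not the filter.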

The last session of the morning started with “Making AR Real” by Steven Feiner, who teaches at Columbia and was another of the top AR people at the conference. If you haven’t read his 2002 Scientific American article Augmented Reality: A New Way of Seeing then have a look.

He showed some eyewear from the past few years, and then what we have now: current glasses are “see-through head-worn displays++,” in that they’re displays with at least one of an orientation tracker, input device, RGB camera, depth camera, computer or radio. Why didn’t all that come together in 1999? The pieces were there … but nothing had the power, and there was no networking possible (no fast mobile data like we have now) and hardly any content to see. The milieu was missing then, but it’s here now.

Idea: collaborative tracking. People could share and accept information from their sensors with people around them, because everyone will have sensors.
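To make the idea concrete, here is a hypothetical sketch (not from Feiner's talk; the function and the naive averaging "fusion" are my inventions): each device pools its own noisy position estimate with estimates shared by nearby devices, so the combined guess is better than any single sensor's.

```python
# Hypothetical sketch of collaborative tracking: a device fuses its own
# noisy (x, y) estimate with estimates shared by neighbouring devices.
# The simple averaging here stands in for real sensor fusion.

def fuse_estimates(own, neighbours):
    """Average a device's (x, y) estimate with its neighbours' estimates."""
    points = [own] + list(neighbours)
    n = len(points)
    x = sum(p[0] for p in points) / n
    y = sum(p[1] for p in points) / n
    return (x, y)

# Three devices observing the same landmark, each a little off:
own = (10.2, 5.1)
shared = [(9.8, 4.9), (10.1, 5.0)]
print(fuse_estimates(own, shared))
```

A real system would weight estimates by each sensor's confidence rather than averaging blindly, but the point stands: more nearby sensors, better tracking.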

How will we manage all the windows? Systems now look like they did 40 years ago, or more, back to the Mother of All Demos. He and colleagues had the idea of environment management: “AR as computation æther embedding other tracked displays and the environment.” Move information from eyewear to a display, from a display to eyewear, or between two displays and see it midway on the eyewear. “How do we control/manage/coordinate what information goes where?” Where do we put things? Head space, body space, device space, world space? This needs to be researched. (How to keep all this calm, too, I wonder?)

Right after Feiner was a former student of his, Sean White (@seanwhite) who now works at Greylock, another Silicon Valley venture capital fund, but this talk (“The Role of Wearables in Augmenting Worlds”) was practical and level-headed, nothing like Chang’s. He gave six things to think about when building an AR system:

  1. Why is it wearable/augmented?
  2. Microinteractions (mentioned Dan Ashbrook and his dissertation): how long a task takes; four seconds from initiation to completion
  3. Ready-to-hand vs present-at-hand (Heidegger, Winograd): example of holding a hammer and nailing in a nail
  4. Dispositions (Lyons and Profita): pose and relationship of device to user; resources; transition costs; wearing Glass around the neck
  5. Aesthetics, form and self-expression (Donald Norman): emotion design, visceral, behavioural, reflective. Eyeglasses reflect who we are.
  6. Systems, not just objects: not just the artifact, but how it relates to other things.

I wasn’t expecting Heidegger to come up. Here I was at a high-tech conference where a good number of people were wearing cameras, but Borges and Heidegger were mentioned before Snowden.

There was more that day, including a fairly poor closing talk and panel and one of the most memorable musical performances I’ve ever seen, but I’ll leave it at that for now.