Thursday, March 6, 2014

Hardware Hacker Culture of New York

My co-founder Kent Rahman and I kicked off the Hardware Hack Lab about six months ago now - and the energy just keeps building and building.

Those of us running the lab take our inspiration not from technology but from people and interactions. We believe that innovations gain most of their richness and momentum from the cultural context of their creation.

The lab yesterday evening
The scene at last week's lab

Therefore, our goal is to convene a regular meeting space of openness, breadth, and inquiry - and good times.

In this post I want to outline a few of the key insights that have really made the weekly lab come of age. The lab now unites a diverse crowd of artists, technologists, researchers, and other interested parties. For us, this cross-disciplinary culture is what it is all about.

Hardware, yes - but ultimately people in a space
We are lucky enough to have ThoughtWorks sponsor us with the use of the amazing Gallery space in NYC every Wednesday.

But what to do with that space? Our thinking is this:

This is a city, like many, in which conversation takes place every day about the recent-and-adjacent possible. The most energetic form of that conversation comes in the act of doing, and in the sharing of knowledge among early-adopting practitioners.

So let's issue an invitation, to have that doing-conversation continue once a week under our roof.

Exploring a custom virtual environment

For us, time spent in the lab is not about formal projects or the project-managed delivery of prototypes (although prototypes can and do emerge).

The real goal is a public space for immersion in the ongoing creative technology conversation.

We ask ourselves:

What are the memes, themes, practices and occurrences going on out there? Can we try them, and learn by doing?

Kent's hands

What types of things can now be done that couldn't be done before? How are they different, who is doing them and why?

We are a small, dynamic space, but the currency of these sessions is the flow of knowledge, information and ideas. It is by a subjective read of this flow, more than anything else, that we judge ourselves. The best way to stay immersed in a fast-moving and complex technology landscape is to help facilitate and curate the conversation.

Facilitation by mood
When Steven Dale came onboard to help us run the lab, he quickly pointed out what we were missing to push this flow even further. Here's Steve introducing the lab:

We had initially been running the show with a bunch of desks in the corner, under standard fluorescent light. For Steve it was obvious - to build energy we must carefully craft a mood:

"My theory is that light and sound transforms the mood of a space significantly, and should match cycles of activity with the outside world. I think we can replicate this here, cheaply - for hardware hack lab, and possibly every night for people to be in the space to work, under a different mindset."

Lighting, sound, spatial relations and even a lo-fi hardware hack playlist have since emerged. Our events now have the same rhythms and crescendos of a good party - a party that just happens to be flooded with creative people, projects and passion to explore.

A vital partnership - the Volumetric Society
For us to really achieve the mix we were looking for, it was vital we connect with people outside our existing networks.

We started our partnership with the Volumetric Society because they already had a lively, cross-disciplinary, maker/doer-oriented community of participants who attended workshops and presentation-oriented meetups around the city. They also already had a strong emphasis on hardware and its use in innovative and creative contexts.

Volumetric logo
Volumetric Society of New York

What we offered Volumetric was an addition to their agenda - a regular learn-by-doing session, an opportunity for participants to get their hands dirty outside the confines of the irregular, technology-specific workshops they already run.

It turns out it's a small world, and many of our early participants, such as OpenBCI, were already influential in both groups. As part of the whole exchange I have started co-curating the Volumetric Society's event agenda now that founder Ellen Pearlman has begun her doctorate in Hong Kong - so there's a strong bond and we are well-aligned.

A nice storm
So it works - a perfect storm of circumstance, mutual needs and the right energy has created an energetic, vibrant and wholly unique meetup space in midtown New York.

RGBD render 1
RGBD render 2
Renders from depth video shot in the lab

To give some examples - last night I showed a film director, an interactive artist & beat-boxer, and an immersive application developer how to build and record with an open-source depth video toolkit (the renders above are also from that toolkit).

In another corner, Kent showed a crowd of participants how to use Unity3D to design immersive environments for Oculus Rift, while Steve paired with a collaborator on a physical computing project based on a foot-pedal interface.

The lab one week

A couple of Raspberry Pi enthusiasts hacked on the Debian-based Pi operating system, while another participant came in and tried Google Glass for the first time.

The atmosphere is always casual
This is a place where people come to meet and hang out, to form connections as well as learn new skills. Our rule is, no particular knowledge required. You are welcome to come look over people's shoulders, try if you feel adventurous, or lead if you want to share your interests.

I want to drive home this point about our philosophy on project-management:

We deliberately avoid setting expectations of particular outcomes. You can't expect production-level technology development to happen in an evening, so don't try.

Instead, do the opposite - and gain the opposite gains. Remove the constraints of 'business objectives' and open the door to serendipity, immersion, and the fun of not knowing what's going to happen or who you are going to meet next.

Strategic collaborations inevitably emerge, and you can continue them in the lab, and/or 'take them offline' (continue them outside lab hours). We've had creative partnerships emerge, and even the formation of proposals for funded projects.

But as good as that is, this isn't about chasing your tail - we start and finish with culture, and work forwards from there.

By 6:30 in the evening the only currency you have left anyway is the inertia from your imagination and your creative passion. Weekly lab time is about spending that currency together, and gaining much more from it than you are likely to gain alone.

Monday, February 10, 2014

Future Visions for Human Interaction

Over the course of an hour at ThoughtWorks last week Ken Perlin described a vision for the future of immersive human interaction.

It was rich and varied subject matter, and it drew a line from Ken's early inspirations right through current research and beyond.

The Internet Society's Joly MacFie was on hand to film it (above), but I'll summarize the thrust of the argument here so you can get a sense. I also want to sprinkle in a few of my own comments and reactions.

What this talk is really about
Over the course of the hour, Ken weaves through a range of subjects including narrative, immersion, imagination, creativity, shorthand pop culture references to the 'future', and human nature - already a lot for one hour.

Add to that a range of technology subjects - wearables, implants, depth & holography, virtual & augmented reality, machine learning, kinematics and software programming - and you find yourself with plenty of rabbit holes to go down.

However, the real vision is all here in this 1-minute clip. In the video below, Professor Whoopee helpfully explains the functioning of a CRT using his 3DBB - his 3-dimensional blackboard (of course). Check it out, and particularly watch how Professor Whoopee uses the 3DBB to communicate and interact with the other characters around him:

The 3DBB is like an immersive environment in which Professor Whoopee can create and operate a virtual, functioning CRT - one that he and the other characters around him all share as a volumetric experience.

Bearing in mind that context, have a look through a transcript of Ken's closing statements to the audience last Tuesday:

"Eventually, when you and I are face-to-face in an augmented version of reality, and we have nothing but our bodies and our eyes and our hands and each other... then we'll be able to use these very very simple techniques, because we've understood the semantics of how I create something for you in realtime. And what I can offer is this library [of intuitively instantiated, intelligent and directable, interactive yet autonomous virtual objects]..."

"Then the virtual world we have between us becomes something that is not just a replication of our physical reality, but actually the kind of reality we'd really like to be in"

In this vision, we are able to spontaneously create and manipulate logically-consistent virtual objects - or we might say 'directable actors', since they are semi-autonomous. These object/actors surrounding us are not impositions on physical reality. As a consequence of immersion, they are indistinguishable from it - an effectively inseparable component of reality itself.

The actor/objects are viewable and interoperable in the space between multiple people, similar to how we currently imagine future holographic interfaces.

But we don't interact with these objects by using a portion of space which we would identifiably define as being an interface. The net effect of immersion combined with shared experience is that the virtual-physical reality we inhabit precludes the need for an interface.

All of it, or rather none of it is the interface.

This versus other visions
I think a lot of people will find that hard to imagine. One way to try and imagine it is to contrast it with other visions of the future.

In his Brief Rant on the Future of Interaction Design, Bret Victor takes a decent swipe at the vision presented in this video:

Bret's criticism is to note that the characters in such visions are immersed in an experience which can take on any shape or form (so long as that form blends the possible characteristics of virtual and physical worlds).

And yet to interact with this immersive reality, they turn to their hands and manipulate a little virtual 'phone'.

Interacting with a contrived interface (credit)

It doesn't make sense.

A real, modern-day phone is like a glass window which you can swipe and prod, and there isn't a great deal of immersive haptic feedback.

Compare that to your intuitive sense of your place in a book by the relative density of pages in each hand. Or the amount of water by the shift in weight distribution as you tilt a glass.

Interacting with a book    Interacting with a glass
Interacting with seamless, integral interfaces (credit)

Natural human interactions are richly physical, and both Ken and Victor point out that we are currently going through a very odd transitional stage - walking around with our heads facing down, glued by our eyes and fingers to screens. Visions for future human interaction should recognize that as soon as our dependency on physical devices for virtual interaction goes, so too goes that framework of interaction.

If no interface, then what?
Ken's answer is that the spontaneous creation of shared virtual actors will become a new staple of human communication, in much the same way that the ability to communicate instantly by video with people on the other side of the world became one.

These actors will be scriptable and responsive to their environment, much like real actors on a stage. But hang on - we've already seen this type of thing recently, haven't we?

The big difference between this and Ken's vision is that on the PS4 you enter into a contrived interaction with a specific subset of actors and scenarios. Because the hardware is not a seamless, integral element of your everyday experience, the user-experience case for being able to instantiate your own actors at will is much weaker.

This raises another question. If the interactions in Ken's vision are not to be contrived, doesn't this rely on each individual user to craft and nurture their own individual libraries of symbolic 'actors'? Or do most people subscribe to commercially available 'packs' of actors, and combine them to curate their own unique library?

Questions start flooding in. In what way does a library contribute to, or become reflective of, a person's personality? And tangentially, if these actors can't provide haptic feedback, are they really so different from the Minority Report interface?

Perhaps a further modification: what if it is the ability to transfer virtual characteristics to physical objects, and to have those objects respond in a haptically meaningful way, that will provide the most engaging experience?

Yes, great conjecture - but is any of this even possible?
Ken spends most of his talk explaining how far he and his students have come, and how he expects technology to develop in the near future. I'd recommend watching if any of this interests you.

We will keep an eye on Ken's work over at Volumetric and catch up with him again to explore these questions in the future.

Tuesday, January 7, 2014

OpenBCI Nears its Kickstarter Goal

With just two weeks to go, OpenBCI is closing in on its $100,000 Kickstarter target.

They've been featured in Forbes, Fast Company, CNET, and of course the ultimate accolade - they've worked with us over at Hardware Hack Lab!

In the run-up to the hackathon we hosted at ThoughtWorks I quizzed Conor on the purpose of OpenBCI. I think his answers bear repeating here.

First, what is OpenBCI?

"OpenBCI is a new open-source initiative. Our mission is to promote brain-computer interface (BCI) research in a transparent atmosphere, by putting the technology in the hands of the people. We have built a BCI prototyping platform that is entirely open and supported by a growing community of hardware and software engineers and makers."

Why OpenBCI, why now?

"OpenBCI has no proprietary algorithms. The barrier of entry is slightly higher than existing commercial BCIs because the software frameworks are still in their infancy. That said, as the open-source community adopts OpenBCI, we hope that the barrier of entry rapidly broadens, allowing makers of all skill levels to begin doing research and development surrounding the human brain and body."

Can you explain the differences between what you are doing and what other open source players are doing, for example OpenEEG?

"Primarily, we are trying to lower the barrier of entry into research-grade EEG. OpenEEG is an amazing initiative, but the platform can be a bit intimidating to newcomers. Most contributors to the OpenEEG project are well-versed in electrical engineering and have the aptitude and/or training to get down and dirty with datasheets and circuit diagrams. With OpenBCI we are looking to provide the open-source community with more flexibility than commercial devices like NeuroSky & Emotiv (which have fixed electrode configurations), while at the same time keeping the barrier of entry low."

Go on.

"In a sense, we want to create the Arduino of BCIs. Arduino made electronics prototyping easy and accessible for everyone from electrical engineers to grade school kids. It's our hope that OpenBCI does the same thing, but for synthesizing the digital signals of the human body. The OpenBCI hardware that is going to be kickstarted will most likely have an integrated microchip, making OpenBCI a programmable BCI with no set electrode configuration - a perfect tool to let the open-source community figure out what non-invasive BCIs are capable of."

"I hope that clarifies our mission a bit."

Yes, it does, thank you!

Friday, December 27, 2013

Sound Chamber (2013)

What happens when you use a human body to control a sound synthesis environment?

That was the simple question Alex Hornbake and I explored when we put together Sound Chamber, a quick and dirty installation at ThoughtWorks Hardware Hack Lab.

This was all about the interaction. We spent a lot of time getting the Kinect interaction with a couple of participants as smooth as possible. We then threw the values over the network and into a SuperCollider patch, which busses them to custom-built synths.

We wanted to discover the nature of using the human body as a controller, and ended up tracking three types of movement:

  • Distance of hands from each other
  • Combined distance of hands from the floor
  • Distance from the sensor to the back of the chamber
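
If you're curious what 'throwing the values over the network' looks like, here is a minimal sketch of the sender side in Python, using the python-osc library. The OSC addresses and the normalized values are illustrative placeholders I've made up for this post, not the actual names in our repo - the real code and instructions are linked below.

    # Minimal sketch: sending tracked body values to SuperCollider over OSC.
    # Assumes python-osc is installed and SuperCollider is listening on its
    # default language port (57120). Addresses and value names are illustrative.
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 57120)

    def send_body_values(hand_distance, hands_to_floor, depth_in_chamber):
        """Forward one frame of normalized (0.0-1.0) tracking values."""
        client.send_message("/soundchamber/hand_distance", hand_distance)
        client.send_message("/soundchamber/hands_to_floor", hands_to_floor)
        client.send_message("/soundchamber/depth", depth_in_chamber)

    # Example frame, as if produced by the Kinect skeleton tracker
    send_body_values(0.42, 0.77, 0.31)

On the SuperCollider side, each address is simply routed onto a control bus feeding the synths, so the mapping from body to sound stays easy to rearrange.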

All the code is up on GitHub along with instructions to replicate. We feel pretty good about adding this mode of interaction to our toolbox and plan to experiment more in the new year.

Wednesday, November 20, 2013

OpenBCI Hackathon at ThoughtWorks

This weekend we held the first OpenBCI (Brain Computer Interface) hackathon at ThoughtWorks. We went from Saturday morning to Sunday night working on challenges of design, community, hardware and software.

Conor Russomanno gives a crash course in EEG
Conor Russomanno gives a crash course in EEG at the hackathon

We have been working with the OpenBCI guys since they started attending our weekly Hardware Hack Lab, also out of the ThoughtWorks New York office.

OpenBCI are Conor Russomanno and Joel Murphy. These guys are serious about lowering the barrier of entry to research-grade EEG. Commercial headsets like NeuroSky do this but tie your hands in the process. OpenBCI is about making a viable open-source alternative.

Aisen Caro poses with a headcap
Aisen Caro poses with a headcap

We had four sets of caps and hardware, and over the course of the two days different volunteers wore the gear so that we could test on real people.

Here's a short video of Joel demoing the application of a cap.

There was a good buzz of excitement and energy and we would definitely host an OpenBCI hackathon again. Looking forward to the next one.

Friday, November 8, 2013

Exploring Depth Video at CultureHub

A short video of me speaking at CultureHub. If you are interested in the Volumetric Lab RGBDToolkit meetups, this is the kind of work we do:

Some more info on the lab:

The Volumetric Lab at CultureHub is an open source, member driven community dedicated to exploring 3D interactive software and hardware. Artists, innovators, and educators engage with interactive motion sensing technologies such as the Kinect and other depth sensing cameras to produce an array of research and experimental art projects. Participants are strongly encouraged to formulate projects with the open source philosophy in mind. We believe that free distribution and access to project development promotes innovation and empowerment.

Wednesday, October 30, 2013

Introduction to iBeacons

This post explores iBeacons. The technology may have been announced only quietly, but it is going to have a big impact on how we perceive computing, and on the so-called Internet of Things.

iBeacons technology is being promoted as a service, particularly for innovative brick-and-mortar retailers looking to differentiate via micro-location, context-aware apps and analytics. However, the technology can just as powerfully erode privacy when those deploying it choose not to be conscientious and self-impose limits.

This post focuses on the technology. I'll be covering what an Estimote Beacon is, what other types of Beacons there are, and how you can make your own. I'll go on to talk about how iBeacons is giving NFC a run for its money.

The Estimote Beacon elevator pitch

This post also disassembles the overloaded term iBeacon, which can be a little confusing at first when you hear it used to describe many different pieces of a larger puzzle.

A Word on Privacy
I've focused mostly on technology in this article, but I want to take a moment to comment on privacy. iBeacons make it possible to easily engage with people in a physical space via their mobile devices. But part of what will make the experience so compelling is the ability to triangulate the precise physical position of each participant. And as long as the person has Bluetooth enabled and your app installed, it will be possible to do this without their permission.

The old adage fits here, that just because you can do a thing doesn't mean you should do a thing. Permission is key. It's not just on moral grounds, but on grounds of building trust. If I enter into a space where beacons are active, I want to be told what value I can expect, and what data I am giving away to get that value. Once I opt in, I am engaged. If I choose not to opt in that should be respected.

There is also a distinction between using real-time data and storing real-time data. But ultimately this technology is out there right now, and people are going to start using it. We must be aware of privacy concerns and go ahead and learn about the technology. That's what this article is for.

What is an Estimote Beacon?
An Estimote Beacon is a device made by a Polish company that utilizes a newly-available mode in Bluetooth called Bluetooth Low Energy (BLE). Later in this article I will describe the iBeacons service on your phone, and alternative Beacon services, but first let's focus on the now well-promoted Beacon called Estimote, and the interactions it can provide.

In the picture below, the Estimote Beacon is broadcasting using BLE. The phone receives the broadcast over its own BLE radio, and the beacon-enabled app recognizes the user's proximity to the beacon. In this case, the beacon is positioned near a bar, and the user is offered a discount on a favorite drink. The favorite drink is identified from the user's purchase history, retrieved via the app.

As the name implies, BLE communicates in a similar manner to regular Bluetooth but consumes much less power, meaning that the Estimote Beacon can run for two years on a tiny coin battery. If your phone supports BLE and Bluetooth is enabled, beacon-enabled apps can work out your proximity to a beacon. If there are several beacons, the app can use the relative strength of their signals to work out your precise location. The app can use this information to send push notifications and deliver context-relevant experiences to your phone.
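
To make the 'relative strength' idea concrete, here is a tiny sketch (mine, not from any vendor SDK) of how an app might turn a beacon's received signal strength (RSSI) into a rough distance estimate, using a log-distance path-loss model. The calibration numbers are illustrative - real apps calibrate the measured power per beacon and per environment, and combine readings from several beacons to triangulate.

    # Rough distance estimation from a beacon's received signal strength (RSSI).
    # measured_power_dbm is the expected RSSI at 1 metre; the path-loss exponent
    # depends on the environment. Both values here are illustrative defaults.

    def estimate_distance_m(rssi_dbm, measured_power_dbm=-59.0, path_loss_exponent=2.0):
        """Return an approximate distance in metres for a given RSSI reading."""
        return 10 ** ((measured_power_dbm - rssi_dbm) / (10.0 * path_loss_exponent))

    # A reading near the measured power means the beacon is close by
    print(estimate_distance_m(-59))   # ~1 metre
    print(estimate_distance_m(-75))   # roughly 6 metres, depending on calibration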

An iPhone connecting to an Estimote Beacon
An iPhone connecting to an Estimote Beacon

The canonical example is a retail store and merchandise stands. When people enter a space or visit certain stands they can be sent targeted information - text and small images, or linked to online rich media such as video or sound. This could be promotions, coupons, recommendations, marketing or informational content, and if there is an app running on the shopper's device and they are logged in, this can be personalized.

There can also be mashups, for example the micro-location equivalent of Google Maps: users can search for a particular item and be guided to its precise location, across the shop floor, up the elevator and so on. Ultimately, this is an early-stage technology and the possibilities are open.

Potential retail applications for iBeacons
Potential retail applications for iBeacons

There is also the possibility for those with logged-in accounts to have contactless payment on leaving the store, since we know exactly when users are leaving. More on an alternative way of doing that with a PayPal dongle below.

Analytics
One other feature which will be coveted by many retailers is the ease with which you will be able to track visitors' precise movements through a store. This brings web-style analytics that much closer to physical retail, as we become able to get quantitative data on more and more details of people's interactions with store merchandise.

As discussed earlier, use of this feature should be approached openly and in an opt-in fashion. Notice the people in the 'Proximity Marketing' section of the image above? Those interactions should be handled delicately.

What is/are iBeacons?
Bluetooth Low Energy (BLE), mentioned earlier, is part of the Bluetooth Specification 4.0 (aka Bluetooth Smart). This flavour of Bluetooth is now available on many phone and tablet devices, and has been baked quietly into iOS7. Most new devices will be BLE compatible:

"The majority of new devices entering the market, including the HTC One, Nokia Lumias, Samsung Galaxy Nexus and the Blackberry Z10 and Q10, among others, are all BLE compatible. In terms of iOS devices, the iPhone 4S and above, the iPad with Retina display and the iPad mini are all BLE compatible."
Source - Mubaloo

This feature of Apple devices and the services that go with it have been named iBeacons, in classic Apple 'iThing' style. But the term iBeacon has become overloaded. I have seen it used in the following contexts:

  • a generic term for a physical beacon in real space, such as an Estimote Beacon
  • the software platform in iOS7 which allows Bluetooth 4.0 hardware to use BLE and talk to beacons
  • the overall service, including the two cases above as one (referred to plurally as iBeacons)

And that's just with iOS, where the term originated.

Who else is in this Space?
Following closely behind Apple are Android equivalents to all of the above, and many times the naming includes the term iBeacon in a me-too fashion. Radius Networks, a start-up out of Washington D.C., offers physical iBeacons ($99) and an Android SDK that lets developers write apps for compatible devices, which they refer to as the Android iBeacon Service.

The physical iBeacons from Radius are just customized Raspberry Pis, which tells you something about the nature of the service - all you need to get going is a Bluetooth 4.0 dongle and something to run free software on.

A San Francisco-based startup called COIN offers an Arduino-based beacon for $22. In the same vein, you can make your own Beacon out of a Raspberry Pi and a Bluetooth 4.0 dongle.
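
To give a flavour of just how little is involved, here is a hedged sketch of a Pi-based beacon: a short Python script that assembles the iBeacon advertising payload and hands it to the BlueZ command-line tools. It mirrors the hcitool incantations widely published for this trick rather than any official recipe - the UUID, major and minor are placeholders, and you'll need BlueZ installed and root privileges.

    # Sketch: advertise as an iBeacon from a Raspberry Pi + Bluetooth 4.0 dongle.
    # Requires BlueZ (hciconfig/hcitool) and root. Identity values are placeholders.
    import subprocess
    import uuid

    BEACON_UUID = uuid.UUID("e2c56db5-dffb-48d2-b060-d0f5a71096e0")  # example UUID
    MAJOR, MINOR = 1, 1
    TX_POWER = 0xC8  # calibrated signal strength at 1 m, two's complement

    def ibeacon_payload(proximity_uuid, major, minor, tx_power):
        """Build the advertising data: BLE flags plus Apple's manufacturer-specific iBeacon structure."""
        data = bytes([0x02, 0x01, 0x1A,        # flags: LE general discoverable, no BR/EDR
                      0x1A, 0xFF, 0x4C, 0x00,  # 26-byte manufacturer data, company ID 0x004C (Apple)
                      0x02, 0x15])             # iBeacon indicator and remaining length (21 bytes)
        data += proximity_uuid.bytes
        data += major.to_bytes(2, "big") + minor.to_bytes(2, "big")
        data += bytes([tx_power])
        payload = [0x1E] + list(data)            # 0x1E = 30 significant octets
        payload += [0x00] * (32 - len(payload))  # pad to the 32 octets the HCI command expects
        return payload

    def start_advertising():
        octets = ["{:02X}".format(b) for b in ibeacon_payload(BEACON_UUID, MAJOR, MINOR, TX_POWER)]
        subprocess.run(["hciconfig", "hci0", "up"], check=True)
        # HCI LE Set Advertising Data (OGF 0x08, OCF 0x0008)
        subprocess.run(["hcitool", "-i", "hci0", "cmd", "0x08", "0x0008"] + octets, check=True)
        # Enable non-connectable advertising
        subprocess.run(["hciconfig", "hci0", "leadv", "3"], check=True)

    if __name__ == "__main__":
        start_advertising()

Any BLE-capable phone running a beacon-aware app should then see the Pi as just another beacon with that UUID/major/minor combination.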

PayPal have a strong presence in this space, having developed a Beacon-as-USB-dongle. You guessed it, this is just a Bluetooth 4.0 dongle for a retailer's laptop, with some bundled PayPal software. From a customer's perspective:

"By checking in to a store à la Foursquare (you can even configure the app check you into places automatically), that retailer has access to the funds in your PayPal account and you can pay for your items directly with that money. It's proximity-based, so you do have to be physically present at the store. The security check happens when the retailer is shown a picture of your face to make sure that you're who you say you are. With that confirmed, your total purchase is deducted from your PayPal account."
Source - Business Insider

And there is one more iBeacon. It's a phone accessory on Kickstarter that was unfortunate enough to have labeled all its hardware with the iBeacon name - and has just had that name usurped...

What are Virtual Beacons?
An Estimote Virtual Beacon is a free iOS app that turns your iWhatever into an iBeacon using its existing BLE hardware (app released Sep 19). With it, you can set up one iPhone as an iBeacon, position it, and use it to track another. A whole iPhone is a bit heavy to serve as just a beacon, but it's useful for testing and playing with the SDK.

Of course, hot on the heels is an Android version (Oct 17), called iBeacon Locate, also by Radius Networks. To use these apps your phone will need to support Bluetooth 4.0, as described above. In the case of Android, this means you will need to be running Android 4.3.

You can also run an iBeacon on a virtual machine, although I'm not sure yet why you would want to do that...

What are the Implications for NFC and RFID?
NFC is RFID's younger cousin, which offers contactless payment and data exchange at short (4cm-ish) distances. Apple stalled for a long time on including NFC reader hardware in their devices, to the frustration of NFC advocates. On the other hand, many Android devices support NFC, and for a while there was a question as to why Apple wasn't jumping on board.

Now we know. Apple is betting on BLE, because as well as winning on cost and range, BLE is already cross-platform. Apple have played their hand against NFC.

But RFID is bigger and more established than NFC. And it does things that iBeacons doesn't do. Humans may carry phones but physical objects don't! In-store merchandise, warehouse stock, any physical objects will be better served by RFID tags. Merchandise tracking is still a key area for RFID, and that translates in-store too. When I pick up a shirt and carry it to a changing room, this interaction will not be visible to any iBeacon.

If you want to use technology to offer participants a full experience including smart interactions with objects beyond localized areas like merchandising stands, you may want to consider a mixed solution involving RFID and iBeacons.