Sunday, June 29, 2014

John Cleese on Creativity

This is a really well-crafted talk on creativity. In it, John Cleese cites academic research and his own personal experience to illustrate some very practical advice on how to become more creative.

I've been working recently on creating my own space-time oasis inspired by these ideas. To give a little back, I'd like to offer my notes as a quick-reference guide. I do this from time to time - it gives you a refresher on the talk without having to dive into a full transcript. Enjoy.

Creativity is not a talent
Unlike an affinity for drawing or problem-solving or maths, creativity is not a talent that you either have or don't have. It's a way of operating. It's a mood you get into when the conditions are right.

It doesn't matter what your skillset is, or your education. According to Cleese, the most creative people are simply those who have mastered the ability to transition to and from this mood at the most appropriate moments, and to hold it for periods of time.

The closed mode
Day in, day out, most of us spend our working hours in the closed mode. We all know the feeling - there's a lot to do and we have to get through it.

Embedded in the closed mode is a sense of importance. Even when ticking off items from a to-do list, there is a larger sense of importance in that we are making some kind of progress. There is a mild tension in this mode, a little anxiety that propels us forward from task to task.

There are several good things about the closed mode. It's productive. It's natural. It makes a lot of logical sense - we want to achieve so we plan and execute. We close off our minds to tangential curiosity, doubt, scenic routes and back alleys. We move forward.

The open mode
The open mode is a rarer place. It's relaxed, less purposeful, more contemplative - more like play. In this mode, we explore curiosity for its own sake, unsure of where it may lead, and we attain breadth as a result.

This mode is hard to achieve, especially when under pressure, and can be easy to break. It can be perceived as a mode in which we are not taking things seriously; however, we should remember that there is a difference between seriousness and solemnity.

This is the mode in which we are most likely to come up with something original.

The best of both worlds
One mode is not inherently better than the other. If we spend our time only in the open mode we are more likely to be original. If we spend our time only in the closed mode we will achieve.

If we are able to switch between modes at the right moments, we gain the best of both worlds - we achieve something original.

The key is the switch
Much of the presentation is spent explaining how to achieve the open mode and to utilise it. But before I go into that, I want to make something clear.

The argument as I understand it is not that you go into open mode, spend an hour, and then come out the other side with a well-formed insight and just carry merrily forward. Instead, at the end of your open mode, you will likely have considered some ideas and at least be able to choose a direction to move forward which seems sound enough. You can get behind it, and do some closed-mode implementing to try it on for size.

The reward for having invested time in open-then-closed mode comes later. It's while you are driving, or while you are in the shower, or some moment when you are partially distracted by something mundane.

What you have done by going open and then closed is to feed your subconscious. Bounce around some crazy ideas, and then exercise one of them. A wide angle followed by a zoom lens. All that info goes into the soup of the unconscious, and as Cleese says, at some point your subconscious will reward you with a gift.

Going forward, you can find an appropriate rhythm for switching between open and closed modes, and the idea is you get a continuous flow of rewards. Sounds nice, doesn't it?

So - how to switch?
The assumption in this presentation is that switching from closed mode to open is much harder than switching from open to closed. That's probably true in most cases, and it certainly is for me. Sometimes it can take me a whole day to get to that mindset - but then that's why I'm studying this presentation, looking for ways to improve that.

According to Cleese, to get to open mode you need 5 things:

#1. Space
Clear a physical space of all the usual visual clutter. Get rid of the to-dos, reminders of the outside world, the 'accusatory piles' of things demanding attention (as my friend Judith Stein would say).

#2. Time
Clear out a window of about 1hr30 in your calendar. According to Cleese, it takes about 30mins to clear your way through all the temptations and nagging to-dos of your closed-mode world - so this leaves a good hour to be creative.

#3. Time
Within 5 minutes of the start of your space-time oasis, you will start recalling items from your closed-mode to-do list. Stay the course. Dismiss those items and try to stay focused on your topic.

The temptation for some people is to drop out of the open mode as soon as the first reasonable idea has formed, and not give it the full 1hr30. This feels safe, as then we can get on with the task of implementation and get moving again, particularly if we are under pressure for results. However, by dropping out early you are left with the likelihood that your idea may not be that original.

A quick addendum to that: If you are in the situation where time is unavoidably short, you have to choose. Do you want a well-executed but trite idea, or a rushed but original one? This is a genuine question - there may well be times that are fully appropriate for either approach.

The temptation for others is to stay in the open mode too long. This is good in that you leave the possibilities open, but bad in that you are not going to achieve as much. As Cleese suggests, the real value is in the flow of new ideas from your subconscious - the discipline of regular mode-switching is more important than getting lost down a single rabbit-hole.

#4. Confidence
In day-to-day life we have a tendency to believe that we must always be right. That we must build, block by block, on a solid foundation of stuff that isn't wrong.

But in the space-time oasis, there is no such thing as right or wrong. All ideas are good ideas so long as you are having them, and you can just follow the course your instinct takes you on.

#5. Humour
Well of course, if you are a comedian. Cleese describes humour as an essential part of spontaneity, and a vital part of all creativity. Maybe so, but I think a little bias has crept into his thinking here.

As a Brit myself I always have dryness and sarcasm available at a moment's notice. I think the argument about humour here is good, but I also think the same could be said about many other emotions, and/or modes of operation.

Saturday, June 14, 2014

Takeaways from Eyeo 2014

Eyeo is a very different type of conference. From the ground up it's an energetic community of creative technologists of different stripes.

The opening night of Eyeo (Credit: Ben Lower)

To give you some idea how people feel about it, check out this tweet from Kate Crawford:

A tweet from Kate Crawford

A lot of what went on was great. I want to take a minute to draw out a few themes.

Data, data, data, data, data
This was the most widely-used term at the festival. Remembering that Eyeo is a festival at the intersection of arts & technology, why such a strong focus on data?

At root something has shifted about our perception of data. The continuing drip-feed of revelations after Snowden is fundamentally changing the way we view our relationship with data - and we know we can't go back. This creative coding community seems hyper-aware of this change, particularly because of the role it increasingly plays in the industries we all work in.

Data analysis, data visualization, data and art, data and surveillance, big data, tiny data, the list goes on. There were talks around all of these, but one talk that stood out for me was Kim Rees's.

U.S. gun deaths in 2013, visualized by Periscopic

Usually the debate centers around the utility afforded by data versus the risks of misuse. Rees spoke instead of data as a new currency. Something which is traded for value, yes, but also data as something local, ubiquitous and capable of independent action. Data, Rees said, will slip fluidly between nanobots, internet-of-things devices and cloud services, reducing our direct interactions with computers and screens.

A new generation of data natives, like digital natives before them, will grow up knowing only a world of ubiquitous data. Designers should get a head start by switching User-Centered Design for "User-Absent Design". As the touchpoints we have with computing get smaller and less command-control, the notion of a user will become outdated.

Rees suggests we embrace the new context, warts and all, because it isn't going anywhere.

Kim Rees is one of the co-founders of Periscopic - tagline: "do good with data".

Learning by breaking
Another idea doing the rounds here is that by breaking a system, by subverting its intention, we can learn something which otherwise remains invisible.

This is basically the manifesto of 'glitch', art which encourages something unexpected in an otherwise ordered system.

An example of glitch
An example of glitch (Source: Kyle McDonald)

This idea was typified by Kyle McDonald's workshop, in which participants used hex editors to break image files, and played 'exquisite corpse' with drawing algorithms to generate unexpected results.

In a way, this idea is kind of frivolous. Yes, you can learn something by breaking, but you can also learn by fixing, studying, disassembling etc.

However looking deeper, there's something more energetic about the intention of glitch. For example, the speed with which we are creating new societal norms is increasing, so the speed with which we can understand them is important too. Breaking things is probably the quickest and most accessible way to look at an underlying system.

It seems to be more of a mindset. At first glance glitch appears to be about compression algorithms and transcoding. But more deeply, glitch is about control of our destiny. It's a critique of the immersive culture of polished brand and presentation, of built environment and conformity.

If we discover an image's underlying patterns, then we show that its appearance is carefully constructed and its human-relevant content just a facade.

The algorithmic and the artistic mindset
The topic of mindsets came up in different forms over the week, and I'm glad it did as I'm in the middle of writing an article about it for P2 magazine.

It was Frieder Nake who expressed the concept in terms of algorithmic and artistic, while delivering his keynote Tuesday night. Nake was well-suited to discuss it, being one of the early pioneers of computer art.

Frieder Nake (Credit: Danbarrantes)

The idea goes something like this. At some point, to understand the relationship between computers and art, you have to understand the algorithmic mindset. Nake amused the crowd with a story about painters in the 50s who refused to accept the idea that computers could 'do their job' the way they did it.

However, computers weren't doing the painters' jobs; they were doing a very different job - a job that is expressed and understood by algorithmic thinking.

Once you make that leap, you have two broad creative mindsets open to you as a practitioner - algorithmic and artistic. Nake seemed to be saying that you need to employ both to be successful. What he expressed in no uncertain terms was that algorithmic thinking alone would not get you there.

RGBDToolkit comes of age
One final thought is that I was really glad I went to see James George and Jonathan Minard present the interactive documentary CLOUDS.

I have worked with the DepthKit, aka RGBD Toolkit, many times and have taught and written tutorials on it.

However, it's about more than just the association - I have daydreamed about really effective interactive documentary for quite a while. This is the first really convincing example of it I have seen.

A 'screening' of the interactive CLOUDS documentary on Thursday

George and Minard navigated the documentary on-stage in a kind of "director's cut", although of course in reality every time you experience the documentary it will be different.

Oddly, the thing that truly makes it compelling is the simple fact that it was shot in 3D. It's the missing ingredient of interactive storytelling... more than a cosmetic tweak.

It opens up the experience to a different kind of immersion than that of 2D film and cinema. It brings metaphors from gaming to bear, such as progression, and traversal through space.

Like much interactive narrative, it effectively circumvents the 'voice of authority' issue often found in traditional documentary, and puts the experiencing audience into a new kind of position of power.

It also raises new questions - the voice of authority may be hidden, but it's still there at lower and less obvious layers. A bit like a choose-your-own-path adventure, the illusion of control belies the fact that the possible paths through the system, and the system itself, were all crafted by someone for some purpose.

However the real takeaway here is that it was a privilege to watch. It's something new, exciting, creative, collaborative, and very much in the spirit of Eyeo. There are moments in life when you pay more attention, because you feel like you are watching something really interesting unfolding in front of you.

Eyeo 2014 was one of those moments.

Thursday, March 6, 2014

Hardware Hacker Culture of New York

My co-founder Kent Rahman and I kicked off the Hardware Hack Lab about six months ago now - and the energy just keeps building and building.

Those of us running the lab take our inspiration not from technology but from people and interactions. We believe that innovations gain most of their richness and momentum from the cultural context of their creation.

The scene at last week's lab

Therefore, our goal is to convene a regular meeting space of openness, breadth, and inquiry - and good times.

In this post I want to outline a few of the key insights that have really made the weekly lab come of age. The lab now unites a diverse crowd of artists, technologists, researchers, and other interested parties. For us, this cross-disciplinary culture is what it is all about.

Hardware, yes - but ultimately people in a space
We are lucky enough to have ThoughtWorks sponsor us with the use of the amazing Gallery space in NYC every Wednesday.

But what to do with that space? Our thinking is this:

This is a city, like many, in which conversation takes place every day about the recent-and-adjacent possible. The most energetic form of that conversation comes in the act of doing, and sharing of knowledge among early-adopting practitioners.

So let's issue an invitation, to have that doing-conversation continue once a week under our roof.

Exploring a custom virtual environment

Time spent in the lab is not for us about formal projects, or project-managed delivery of prototypes (although prototypes can and do emerge).

The real goal is a public space for immersion in the ongoing creative technology conversation.

We ask ourselves:

What are the memes, themes, practices and occurrences going on out there? Can we try them, and learn by doing?

Kent's hands

What types of things can now be done that couldn't be done before? How are they different, who is doing them and why?

We are a small, dynamic space, but the currency of these sessions is in the flow of knowledge, information and ideas. It is more by a subjective read of this flow that we might judge ourselves than by anything else. The best way to be immersed in a fast-moving and complex technology landscape is to help facilitate and curate the conversation.

Facilitation by mood
When Steven Dale came onboard to help us run the lab, he quickly pointed out what we were missing to push this flow even further. Here's Steve introducing the lab:

We had initially been running the show with a bunch of desks in the corner, under standard fluorescent light. For Steve it was obvious - to build energy we must carefully craft a mood:

"My theory is that light and sound transforms the mood of a space significantly, and should match cycles of activity with the outside world. I think we can replicate this here, cheaply - for hardware hack lab, and possibly every night for people to be in the space to work, under a different mindset."

Lighting, sound, spatial relations and even a lo-fi hardware hack playlist have since emerged. Our events now have the same rhythms and crescendos of a good party - a party that just happens to be flooded with creative people, projects and passion to explore.

A vital partnership - the Volumetric Society
For us to really achieve the mix we were looking for, it was vital we connect with people outside our existing networks.

We started our partnership with the Volumetric Society because they already had a lively, cross-disciplinary, maker/doer-oriented community of participants who attended workshops and presentation-oriented meetups around the city. They also already had a strong emphasis on hardware and its use in innovative and creative contexts.

Volumetric Society of New York

What we offered Volumetric was an addition to their agenda - a regular learn-by-doing session, an opportunity for participants to get their hands dirty outside the confines of the irregular, technology-specific workshops they already run.

It turns out it's a small world, and many of our early participants, such as OpenBCI, were already influential in both groups. As part of the whole exchange I have started co-curating the Volumetric Society's event agenda now that founder Ellen Pearlman has begun her doctorate in Hong Kong - so there's a strong bond and we are well-aligned.

A nice storm
So it works - a perfect storm of circumstance, mutual needs and the right energy has created an energetic, vibrant and wholly unique meetup space in midtown New York.

Renders from depth video shot in the lab

To give some examples - last night I showed a film director, an interactive artist & beat-boxer, and an immersive application developer how to build and record with an open-source depth video toolkit (the renders above are also from that toolkit).

In another corner, Kent showed a crowd of participants how to use Unity3D to design immersive environments for Oculus Rift, while Steve paired with a collaborator on a physical computing project based on a foot-pedal interface.

The lab one week

A couple of Raspberry Pi enthusiasts hacked on the Debian-based Pi operating system, while another participant came in and tried Google Glass for the first time.

The atmosphere is always casual
This is a place where people come to meet and hang out, to form connections as well as learn new skills. Our rule is, no particular knowledge required. You are welcome to come look over people's shoulders, try if you feel adventurous, or lead if you want to share your interests.

I want to drive home this point about our philosophy on project-management:

We deliberately avoid setting expectations of particular outcomes. You can't expect production-level technology development to happen in an evening, so don't try.

Instead, do the opposite - and reap the opposite rewards. Remove the constraints of 'business objectives' and open the door to serendipity, immersion, and the fun of not knowing what's going to happen or who you are going to meet next.

Strategic collaborations inevitably emerge, and you can continue them in the lab, and/or 'take them offline' (continue them outside lab hours). We've had creative partnerships emerge, and even the formation of proposals for funded projects.

But as good as that is, this isn't about chasing your tail - we start and finish with culture, and work forwards from there.

By 6:30 in the evening the only currency you have left anyway is the inertia from your imagination and your creative passion. Weekly lab time is about spending that currency together, and gaining much more from it than you are likely to gain alone.

Monday, February 10, 2014

Future Visions for Human Interaction

Over the course of an hour at ThoughtWorks last week Ken Perlin described a vision for the future of immersive human interaction.

It was rich and varied subject matter, and it drew a line from Ken's early inspirations right through current research and beyond.

The Internet Society's Joly MacFie was on hand to film it (above), but I'll summarize the thrust of the argument here so you can get a sense. I also want to sprinkle in a few of my own comments and reactions.

What this talk is really about
Over the course of the hour, Ken weaves through a range of subjects including narrative, immersion, imagination, creativity, shorthand pop culture references to the 'future', and human nature - already a lot for one hour.

Add to that a range of technology subjects - wearables, implants, depth & holography, virtual & augmented reality, machine learning, kinematics and software programming - and you find yourself with plenty of rabbit holes to go down.

However, the real vision is all here in this 1-minute clip. In the video below, Professor Whoopee helpfully explains the functioning of a CRT using his 3DBB - his 3-dimensional blackboard (of course). Check it out, and particularly watch how Professor Whoopee uses the 3DBB to communicate and interact with the other characters around him:

The 3DBB is like an immersive environment in which Professor Whoopee can create and operate a virtual, functioning CRT - one which he and the other characters around him can all share as a volumetric experience.

Bearing in mind that context, have a look through a transcript of Ken's closing statements to the audience last Tuesday:

"Eventually, when you and I are face-to-face in an augmented version of reality, and we have nothing but our bodies and our eyes and our hands and each other... then we'll be able to use these very very simple techniques, because we've understood the semantics of how I create something for you in realtime. And what I can offer is this library [of intuitively instantiated, intelligent and directable, interactive yet autonomous virtual objects]..."

"Then the virtual world we have between us becomes something that is not just a replication of our physical reality, but actually the kind of reality we'd really like to be in"

In this vision, we are able to spontaneously create and manipulate logically-consistent virtual objects, or we might say 'directable actors' since they are semi-autonomous. These object/actors surrounding us are not impositions on physical reality. As a consequence of immersion, they are indistinguishable, an effectively inseparable component of reality itself.

The actor/objects are viewable and interoperable in the space between multiple people, similar to how we currently imagine future holographic interfaces.

But we don't interact with these objects by using a portion of space which we would identifiably define as being an interface. The net effect of immersion combined with shared experience is that the virtual-physical reality we inhabit precludes the need for an interface.

All of it, or rather none of it, is the interface.

This versus other visions
I think a lot of people will find that hard to imagine. One way to try and imagine it is to contrast it with other visions of the future.

In his Brief Rant on the Future of Interaction Design, Bret Victor takes a decent swipe at the vision presented in this video:

Bret's criticism is to note that the characters in such visions are immersed in an experience which can take on any shape or form (so long as that form blends the possible characteristics of virtual and physical worlds).

And yet to interact with this immersive reality, they turn to their hands and manipulate a little virtual 'phone'.

Interacting with a contrived interface (credit)

It doesn't make sense.

A real, modern-day phone is like a glass window which you can swipe and prod, and there isn't a great deal of immersive haptic feedback.

Compare that to your intuitive sense of your place in a book by the relative density of pages in each hand. Or the amount of water by the shift in weight distribution as you tilt a glass.

Interacting with a book    Interacting with a glass
Interacting with seamless, integral interfaces (credit)

Natural human interactions are richly physical, and both Ken and Victor point out that we are currently going through a very odd transitional stage - walking around with our heads facing down, glued by our eyes and fingers to screens. Visions for future human interaction should acknowledge that as soon as our dependency on physical devices for virtual interaction goes, so too goes that framework of interaction.

If no interface, then what?
Ken's answer is that the spontaneous creation of shared virtual actors will become a new staple of human communication, in much the same way that the ability to instantly communicate by video with people on the other side of the world became one.

These actors will be scriptable and responsive to their environment, much like real actors on a stage. But hang on - we've already seen this type of thing recently, haven't we?

The big difference between this and Ken's vision is that in the PS4 you enter into a contrived interaction with a specific subset of actors and scenarios. Because the hardware is not a seamless, integral element of your everyday experience, the user-experience case for being able to instantiate your own actors at will is much weaker.

This raises another question. If the interactions in Ken's vision are not to be contrived, doesn't this rely on each individual user to craft and nurture their own individual library of symbolic 'actors'? Or do most people subscribe to commercially available 'packs' of actors, and combine them to curate their own unique library?

Questions start flooding in. In what way does a library contribute to, or become reflective of, a person's personality? And tangentially, if these actors can't provide haptic feedback, are they really so different from the Minority Report interface?

Perhaps a further modification. What if it is the ability to transfer virtual characteristics to physical objects, and have those physical objects respond in a haptically-meaningful way that will provide the most engaging experience?

Yes, great conjecture - but is any of this even possible?
Ken spends most of his talk explaining how far he and his students have come, and how he expects technology to develop in the near future. I'd recommend watching if any of this interests you.

We will keep an eye on Ken's work over at Volumetric and catch up with him again to explore these questions in the future.

Tuesday, January 7, 2014

OpenBCI Nears its Kickstarter Goal

With just two weeks to go, OpenBCI is closing in on its $100,000 Kickstarter target.

They've been featured in Forbes, Fast Company, CNet, and of course the ultimate accolade - they've worked with us over at Hardware Hack Lab!

In the run-up to the hackathon we hosted at ThoughtWorks I quizzed Conor on the purpose of OpenBCI. I think his answers bear repeating here.

First, what is OpenBCI?

"OpenBCI is a new open-source initiative. Our mission is to promote brain-computer interface (BCI) research in a transparent atmosphere, by putting the technology in the hands of the people. We have built a BCI prototyping platform that is entirely open and supported by a growing community of hardware and software engineers and makers."

Why OpenBCI, why now?

"OpenBCI has no proprietary algorithms. The barrier of entry is slightly higher than existing commercial BCIs because the software frameworks are still in their infancy. That said, as the open-source community adopts OpenBCI, we hope that the barrier of entry rapidly broadens, allowing makers of all skill levels to begin doing research and development surrounding the human brain and body."

Can you explain the differences between what you are doing and what other open source players are doing, for example OpenEEG?

"Primarily, we are trying to lower the barrier of entry into research-grade EEG. OpenEEG is an amazing initiative, but the platform can be a bit intimidating to newcomers. Most contributors to the OpenEEG project are well-versed in electrical engineering and have the aptitude and/or training to get down and dirty with datasheets and circuit diagrams. With OpenBCI we are looking to provide the open-source community with more flexibility than commercial devices like NeuroSky & Emotiv (which have fixed electrode configurations), while at the same time keeping the barrier of entry low."

Go on.

"In a sense, we want to create the Arduino of BCIs. Arduino made electronics prototyping easy and accessible for everyone from electrical engineers to grade school kids. It's our hope that OpenBCI does the same thing, but for synthesizing the digital signals of the human body. The OpenBCI hardware that is going to be kickstarted will most likely have an integrated microchip, making OpenBCI a programmable BCI with no set electrode configuration - a perfect tool to let the open-source community figure out what non-invasive BCIs are capable of."

"I hope that clarifies our mission a bit."

Yes, it does, thank you!

Friday, December 27, 2013

Sound Chamber (2013)

In what ways can a human body control continuously-generated synthetic sound?

That's the question that drove Alex Hornbake and me to create Sound Chamber, an exploratory audio-visual installation presented at ThoughtWorks New York in 2013.

This project was one of the outcomes of Hardware Hack Lab, a co-working space for technologists and artists run every Wednesday here in New York.

In it we took a close look at full-body interaction using commercial depth devices such as the Kinect or Asus Xtion. We tied these devices up to a custom-designed sound synthesis environment built with SuperCollider, and also out to video screens placed around the space.

Alex & I considered this project a creative exploration, and the installation an unfinished demo - an outcome of the exploration. As such, all the code is up on Github along with instructions to replicate. We feel pretty good about adding this mode of interaction to our toolbox and plan to experiment more in the new year.

Wednesday, November 20, 2013

OpenBCI Hackathon at ThoughtWorks

This weekend we held the first OpenBCI (Brain Computer Interface) hackathon at ThoughtWorks. We went from Saturday morning to Sunday night working on challenges of design, community, hardware and software.

Conor Russomanno gives a crash course in EEG at the hackathon

We have been working with the OpenBCI guys since they started attending our weekly Hardware Hack Lab, also out of the ThoughtWorks New York office.

OpenBCI are Conor Russomanno and Joel Murphy. These guys are serious about lowering the barrier of entry to research-grade EEG. Commercial sets like NeuroSky do this but tie your hands in the process. OpenBCI is about making a viable open-source alternative.

Aisen Caro poses with a headcap

We had four sets of caps and hardware, and over the course of the two days different volunteers wore the gear so that we could test on real people.

Here's a short video of Joel demoing the application of a cap.

There was a good buzz of excitement and energy and we would definitely host an OpenBCI hackathon again. Looking forward to the next one.