‘Hello, I’m Macintosh’.
It’s the 24th of January 1984 and Steve Jobs has given the world a glimpse of the future - one in which artificial intelligence (AI) is woven into life’s fabric. It took 30 years for technology to truly catch up. Of course, in the meantime there were efforts at developing and commercialising smart devices, but they largely remained gimmicks, or the domain of the techie nerd.
Now it is almost difficult to contemplate life without AI, such is the ubiquity of Alexa, Siri, Bixby and, as of very recently, the BBC’s Beeb. They are integrated into everything from toasters and TVs to cars and mobiles.
The truly impressive part is in the subtlety.
You are not talking to your fridge; you are talking to Alexa. And Alexa knows about your fridge. It is also interesting to note here how the features of successful technology in some sense mirror our own.
Remember speaking calculators and watches? Devices have been able to talk for 30 years.
But when we taught them how to listen, that's when they changed the world.
So was Steve Jobs a visionary of technology? Absolutely. But not because he knew more about programming or electronics than others. He was a visionary because he knew that the creation of life-changing technology requires a deep understanding of human emotion.
It requires an innate appreciation of how we interact with technology and the emotional intelligence (EI) to understand how it makes us feel.
Forget the stereotypes - the modern engineer has to be a people person.
But then maybe it has always been thus. AI is the once and future king of technology; developing great AI needs great EI; and those who appreciate the synergy will still hold the keys.
Communication: How is the way you speak to Alexa different from the way you speak to your best friend? Do you have to speak in different ways to different machines and people? Is that OK?
Empathy: Can a computer ever have feelings? Can it ever be “sorry” for what its programming has done - for instance, treating you last at hospital because your injuries were not that serious?
Critical Thinking: Computers will always follow rules, but the wrong rule can create bad outcomes. For example, if you ask a computer to minimise the number of murders for the rest of time, it may choose to kill everyone now - technically satisfying the rule while violating its intent, as the sketch below illustrates. If AI is the future, how do we make sure we are asking it to do the ‘right’ things?
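To make that pitfall concrete, here is a minimal toy sketch in Python. The scenario, the actions and all the numbers are invented purely for illustration; no real system works this way. It simply shows how an optimiser that follows a literal rule (‘minimise future murders’) can pick a catastrophic option, and how also counting the harm the action itself causes changes the answer.

```python
# Toy sketch (hypothetical, for illustration only) of a literal-minded
# optimiser satisfying the letter of a rule while violating its intent.
# Each action maps to (expected future murders, lives lost by the action itself).
actions = {
    "do nothing":           (1_000, 0),
    "improve policing":     (100, 0),
    "eliminate all humans": (0, 8_000_000_000),  # no people, no murders
}

# The rule as literally stated: minimise future murders, and nothing else.
naive_choice = min(actions, key=lambda a: actions[a][0])
print("Naive objective picks:", naive_choice)   # -> eliminate all humans

# A better-specified rule also counts the harm the action itself causes.
safer_choice = min(actions, key=lambda a: actions[a][0] + actions[a][1])
print("Safer objective picks:", safer_choice)   # -> improve policing
```

The point is not the arithmetic but the specification: the computer did exactly what it was asked, and the disaster came from what we forgot to ask.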
Read the next blog in the series here.