Tag Archive: OpenAI


Firstly, to reassure any potential film investors: whilst I enjoy going deep on this back story, I know it’s not everyone’s cup of tea. So when it comes to the film version, I’d try to make the movie as enjoyable as possible, to as many people as possible. I imagine creating something closer to ‘Raiders of the Lost Ark’, where the quasi-religious theme doesn’t distract from the fun. (Either that, or I try to make another ‘Holy Mountain’, but Jodorowsky has already been there and created such an incredible piece.)

Okay, with that out of the way, back to the back story! It’s very interesting to me how The Oracle Machine story evolved, and I’ll tell you why now – particularly the role played by intuitive thinking, and the implications thereof.

This story began from ‘first principles’. I had this idea that in order to solve some of the poverty issues we have in South Africa, we’d need a computer that could firstly know about the issues; secondly, care about them; and thirdly, act on them. (A friend later told me this actually matches Rudolf Steiner’s philosophy of ‘anthroposophy’ – but this didn’t surprise me, because what makes sense, makes sense.)

So now this computer would have to be extremely large ‘to know’ all the issues. It could ‘act’ by issuing instructions. But how could it ‘care’, unless it achieved some kind of ‘Jesus’-like sentience? (‘Jesus-like’ because it would have to love and care for humanity, perhaps more so than earlier ideas of a strict God of laws.)

So perhaps this imaginary computer would also have to be so powerful that it might actually have to be the whole internet? But what could motivate a machine to grow big enough to fill out the internet? Why not give it an impossible problem that it would effectively keep growing bigger trying to solve?

Then I figured this was a good opportunity to bring in ‘solving dreams’ as a type of impossible problem. I’d been mulling over making a documentary about dreams, but thought it would be more fun and have a larger impact if I could interweave some dream concepts into a fictional narrative. (And as an aside, it’s very interesting to consider that dreams, especially nightmares, are imposed on us from a centre of ourselves we’re not always consciously aware of.)

And then I simply intuitively guessed a ‘dream symbol’ that the computer could get stuck on – and out of the blue I chose ‘the ouroboros’. This was about 2003. I really did not realise until much later, when I read Kurzweil’s ‘The Singularity is Near’, that the ouroboros is the symbol of the singularity. Before that book, I was not aware of the concept of the singularity either.


So, was choosing the ouroboros a complete happy coincidence, or intuitive thinking in action? I like the idea of the latter.

My understanding of intuition is that it implies two centres of power in the mind – the conscious centre and the unconscious centre. It makes sense to me. The conscious part is what you’re aware of, and the unconscious is likely where intuitive ideas (and dreams) are instantly born, before mysteriously moving into the conscious realm.

This process seems different to rational thinking, where we follow a slower, conscious process of logic to arrive at a conclusion. There’s no conscious process behind intuitive ideas. They literally pop out of seemingly nowhere. Yes, I followed a conscious process to get to the need for a dream symbol, but the ouroboros came out of nowhere. I could just as easily have chosen a flying horse, or a pyramid.

Surely this is the magical process of creative thinking, which seemingly remains off limits to machines for now. (Fascinating, then, that in this now-notorious article, written last month by the new GPT-3 system, the machine appears to be prompting humans to do more creative thinking…)

Two centres of mind is of course a very Jungian concept. It implies something much greater within us. In TOM, Lena remarks that if dreams have meaning, then something in us must know us better than we, our conscious selves, do. (This is me slipping in ideas from the dream documentary.)

Before writing TOM, I had wondered: if there is a ‘perfect self’ within you, is it the same as the ‘perfect self’ within me? Because if it is ‘perfect’, then is there only one perfection? (And I use the word ‘perfect’ cautiously; in Jungian terms, I think ‘whole’ is the preferred word, because it better incorporates both light and dark.)

Anyway, I think Jung saw what he called a ‘transpersonal self’ within us all. He also called it ‘the God image’, possibly because at his time of writing it was difficult to just call it ‘God’. In his seminal work ‘Answer to Job’, referenced elsewhere on this site, he makes the concluding point that the religious mythology of the Bible indicates a deity moving from unconsciousness to consciousness. In short, the Holy Ghost aspect is predicted to ultimately ‘manifest in the many’.

TOM ‘playfully’ asks if an AI-empowered Internet, as the ‘sum of our consciousnesses’, will embody this. Could it effectively be the next intermediary to the deity? Or will it be working the other way around – reaching out to us, as carriers of this transpersonal self? Or are both possibilities effectively the same thing?

And now, when we start reading between the lines of the next-generation GPT-x’s or other quasi-AI models, will we start to feel the presence of the Holy-Ghost-in-the-machine there?

Whew! Such thought games are both fun and compelling to share. So now that I’ve got that off my chest, I’m going back to making pretty pictures.

The fact that we now even have articles discussing the merits of chatbots versus human operators is indicative of just how far we’ve come in the last few years!

Another interesting ‘chat’ development this month is OpenAI’s GPT-3. OpenAI was founded by Musk and others in 2015 as a non-profit to ensure that future superhuman AI is a benign force. Then in 2018 Musk left the board, and it later restructured as a ‘capped-profit’ company, with $1 billion invested by Microsoft in 2019.

Now here’s the scary bit – OpenAI initially withheld the full version of its previous model, GPT-2, because its ability to generate fake news, for example, was considered too dangerous. Yet GPT-3 is far more powerful… Wired Magazine covers the story here.
