The Rise of AI
AI origins
For most of this pathway, you'll be learning about the technical details of AI. But before we get into all that, we'd like to set the scene with a little bit of AI history. No, we're not talking about Talos this time. Instead, we're jumping back to a machine called the analytical engine.
The analytical engine was designed to be the world's first general-purpose computer. It was the brainchild of Charles Babbage, an English mathematician and inventor, in the 1830s. You could feed it punched cards, which functioned like programs, and it would respond with a printed answer.
At least, that's what was meant to happen. In the end, Charles Babbage ran out of funding, and never got to finish the project. But the theory behind it was solid. And it got people wondering about something: if the engine had actually been finished, would it have counted as 'intelligent', or not?
While Babbage was working on his analytical engine, he was supported by Augusta Ada King: mathematician, writer, computer programmer, and respectable Countess of Lovelace.
Ada Lovelace (as she's often known) wrote extensive notes about the analytical engine's capabilities. And in 1843, she made an important observation: "the analytical engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform."
In other words, she was touching upon that modern distinction between computing and artificial intelligence. The analytical engine wasn't 'intelligent', because it could only follow pre-programmed instructions, as opposed to taking the human-like step of 'originating' something new.
Roughly a century after Ada Lovelace, a new figure rose to the forefront of computing and picked up the question of artificial intelligence. His name was Alan Turing – in a lot of ways, we might think of him as the father of modern AI.
In 1950, he published a paper titled Computing Machinery and Intelligence. In it, he set out to answer one big question: are machines capable of thought?
Yes, said Turing. In theory, a machine is capable of human-like thought. That technology was still a long way off, but one day, thought Turing, humanity would manage to build an intelligent machine.
In that paper, Turing also suggested a way to check if a machine can think. This check became known as the Turing Test. There are a few variations, but one of these tests might look a little something like this.
A human evaluator (C) is told to speak with two participants (A and B) via text. One of these participants is a human; the other is secretly a machine. Afterwards, the evaluator is asked a question: of the two participants, can they tell which one was the machine?
If the evaluator struggles to identify the machine, then that machine must have displayed some level of human-like behavior. And in the eyes of Turing, that human-like behavior is evidence of human-like thought.
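To make that setup a little more concrete, here's a minimal sketch in Python of how the imitation game could be structured. It isn't anything Turing specified – the two participant functions below are hypothetical placeholders – but it shows the shape of the test: the evaluator only ever sees answers behind anonymous labels.

```python
import random

def human_participant(question):
    # Placeholder: in a real test, a person would type this answer.
    return "Honestly, I can never remember where I left my keys."

def machine_participant(question):
    # Placeholder: in a real test, this is the program being evaluated.
    return "Honestly, I can never remember where I left my keys."

def imitation_game(questions):
    # Hide the human and the machine behind the anonymous labels A and B.
    participants = {"A": human_participant, "B": machine_participant}
    if random.random() < 0.5:
        participants = {"A": machine_participant, "B": human_participant}

    # The evaluator (C) sees only the labelled answers...
    for question in questions:
        for label, participant in participants.items():
            print(f"{label}: {participant(question)}")
    # ...and must then guess which label was hiding the machine.

imitation_game(["What did you do this morning?"])
```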
At the time, this was all theoretical. No machine could have passed the test. But Turing's writings were still influential. This was the very first time that AI had been discussed in such a detailed, deliberate way.
AI golden age
In 1950, when Alan Turing was writing his paper about machine intelligence, the whole subject was still mostly theoretical. No AI models had ever been built – but it wouldn't take long for this to change.
In 1955, a team of American computer scientists collaborated on a cutting-edge project. Its name was Logic Theorist, and it's generally thought of as the world's very first AI.
Logic Theorist was an Artificial Narrow Intelligence (ANI), which was designed to solve mathematical problems and establish proofs for famous theorems. This was logical reasoning in action – superficially, at least, Logic Theorist was performing a human-like cognitive process.
Logic Theorist was a big leap forward. And it made people realize something. This emerging field of 'intelligent computers' didn't really have a name.
In 1956, a group of leading scientists in the United States – including the team who had worked on Logic Theorist – decided to meet up at Dartmouth College, New Hampshire. There, they formally established the field of Artificial Intelligence, and the name has stuck ever since.
At the Dartmouth Conference, as this event became known, the scientists also came up with some goals for the field. Logic Theorist was just the beginning – they wanted to start building Artificial Intelligences which could use language, self-improve, and think creatively.
The Dartmouth Conference was followed by an exciting couple of decades, which are sometimes referred to as the AI golden age. Inspired by Logic Theorist, more and more scientists started to build AIs.
Most of these AIs were based on an idea called symbolic programming. In simple terms, this meant giving a computer a tree of logical rules, which it would follow step by step to simulate 'reasoning', 'decision making', and other human-like processes.
Logic Theorist was based on this approach. Another famous example was Eliza, the world’s first AI chatbot. Eliza used symbolic programming to simulate the dialogue of a psychoanalyst, basically just spotting key words and patterns in pieces of text, then generating relevant responses.
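To give a flavor of how this keyword-and-pattern approach works, here's a tiny sketch in Python. It's nothing like the original 1960s program in detail – the rules and responses below are made up purely for illustration – but the basic trick is the same: match a pattern, then slot the user's own words into a canned reply.

```python
import re

# A tiny, made-up rule table: each entry is a pattern plus a response template.
RULES = [
    (re.compile(r"\bi am (.*)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.*)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (mother|father)\b", re.IGNORECASE), "Tell me more about your {0}."),
]
FALLBACK = "Please, go on."

def eliza_reply(message):
    # Scan the rules in order; the first pattern that matches decides the reply.
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(eliza_reply("I am worried about my exams"))  # Why do you say you are worried about my exams?
print(eliza_reply("My mother used to say that"))   # Tell me more about your mother.
print(eliza_reply("The weather is nice today"))    # Please, go on.
```

The real Eliza had a much richer rule set (and would also swap pronouns, turning "my" into "your"), but the principle is the same: no understanding, just pattern matching.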
You can still find versions of the Eliza chatbot online. Here's an example of a chat with her:
This dialogue isn't perfect. But it's convincing enough that some people who used Eliza in the 1960s came away with the impression that they were speaking to an actual person. In other words, the Eliza chatbot could arguably have passed the Turing Test.
AI winter
As we've already talked about, the 1950s and 60s were a golden age in the history of Artificial Intelligence. These decades saw the birth of the first AI models, not just Logic Theorist and Eliza, but plenty of others too.
And it felt like this was only the start. In 1958, the New York Times reported that it was a matter of time before an electronic computer would be able to "walk, talk, see, write, reproduce itself and be conscious of its existence."
People were excited. People were hyped. Funding flowed in from all directions. But as it turned out... this boom wouldn't last for long.
The problem with early AI models was that they were painfully limited in scope. These were 'narrow' AIs in the strictest sense of that word – and scientists were struggling to build anything more complex or advanced.
In one famous example, IBM designed an Artificial Intelligence which could translate Russian sentences into English. But it could only translate very simple sentences – this AI knew no more than 6 grammatical rules, and 250 words.
Over the next few years, the US government invested almost $20 million into AI translators like this one. But the work never really got anywhere. In 1966, most of this funding was cut.
By the 1970s, a lot of people were starting to think that AI was nothing but a gimmick. These models didn't have any real-world uses. They were basically just high-tech toys.
Scientists still strove to build something useful. But as hard as they tried, they couldn't manage it. Computing power became a major bottleneck – even when they thought of more advanced ideas, the technology wasn't there to support them.
As more and more people lost interest, and more and more funding dried up, the field entered a period of time which is often called the AI winter.
The AI winter continued, on and off, all the way into the early 2000s. Though it has to be said, there were still some pretty exciting moments on the way.
For example, in early 1996, the reigning world chess champion, Garry Kasparov, played a series of games against an Artificial Intelligence named Deep Blue.
Deep Blue used symbolic programming to evaluate millions of chess positions every second, then pick whichever move its calculations rated most highly. It played 6 games against Kasparov, and although it lost the match overall, it managed to win one of the games – the first time a machine had beaten a reigning world champion under standard tournament conditions. (A year later, an upgraded Deep Blue would go on to win a rematch outright.)
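Deep Blue itself ran on custom chess hardware and was far more sophisticated, but the core idea – look ahead through the tree of possible moves and score the resulting positions – can be sketched in a few lines of Python. The 'game' below is a made-up toy, and real engines layer techniques like alpha-beta pruning on top of this.

```python
def minimax(position, depth, maximizing, legal_moves, apply_move, evaluate):
    """Search the game tree to a fixed depth and return the best achievable score."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)
    child_scores = (
        minimax(apply_move(position, move), depth - 1, not maximizing,
                legal_moves, apply_move, evaluate)
        for move in moves
    )
    # The player to move picks the best score for themselves,
    # assuming the opponent will do the same on their turn.
    return max(child_scores) if maximizing else min(child_scores)

# Toy usage: a 'game' where each move adds or subtracts 1 from a running total,
# and a position's score is simply the total itself.
best = minimax(
    0, 3, True,
    legal_moves=lambda pos: [+1, -1],
    apply_move=lambda pos, move: pos + move,
    evaluate=lambda pos: pos,
)
print(best)  # 1 -- the best total the first player can force after three moves
```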
This was an exciting development for Artificial Intelligence. But again, it was a bit of a gimmick. A chess-playing robot was fun in theory, but just like Logic Theorist and the Eliza chatbot, it didn't really have any useful applications in practice.
AI spring
In 2016, exactly twenty years after Deep Blue faced off against Garry Kasparov, a research laboratory named Google DeepMind successfully developed an exciting new AI.
The name of this AI was AlphaGo – and it was designed to play the ancient Chinese board game of Go. While the world looked on, it went head-to-head against the revered Go master Lee Sedol in Seoul, South Korea.
The game of Go is extremely complex – there are vastly more possible moves and positions than in chess, far too many for a machine to simply search through by brute force. Because of this, most people predicted a landslide victory for Lee. But instead, to everyone's general amazement, AlphaGo achieved a stunning 4-1 win.
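To put rough numbers on that claim: a common back-of-the-envelope estimate gives chess about 35 legal moves per turn over a game of roughly 80 moves, while Go offers about 250 moves per turn over roughly 150 moves. These figures are only approximations, but they're enough to show why brute-force search runs out of road:

```python
import math

# Rough, commonly cited estimates of branching factor and typical game length.
chess_branching, chess_length = 35, 80
go_branching, go_length = 250, 150

# Express the size of each game tree as a power of ten.
chess_magnitude = chess_length * math.log10(chess_branching)
go_magnitude = go_length * math.log10(go_branching)

print(f"Chess game tree: roughly 10^{chess_magnitude:.0f} positions")
print(f"Go game tree:    roughly 10^{go_magnitude:.0f} positions")
# Prints roughly 10^124 for chess and 10^360 for Go -- both astronomically
# large, but the gap helps explain why a Deep Blue-style search-heavy
# approach never cracked Go.
```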
Now, it's important to understand that AlphaGo and Deep Blue were two different types of AI. As we've already talked about, Deep Blue relied on symbolic programming – a tree of commands and rules.
But AlphaGo relied on something called a neural network. We'll talk more about these a bit later. But in simple terms, a neural network is a web of artificial neurons, linked together by a huge number of weighted connections – a structure loosely inspired by the human brain.
Again, we'll get into the details later. But here's the important part (for now): this type of AI is a lot more advanced, and a lot more powerful, than traditional symbolic programming.
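As a very rough preview of what that means in practice, here's a minimal sketch in Python (using NumPy) of a tiny network: each artificial 'neuron' just takes a weighted sum of its inputs and passes the result through a simple non-linear function. The weights below are random, purely for illustration – in a real network like AlphaGo's, they are learned from huge amounts of data and experience.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def layer(inputs, weights, biases):
    # One layer of artificial neurons: weighted sums plus a ReLU non-linearity.
    return np.maximum(0.0, inputs @ weights + biases)

# A tiny network: 3 inputs -> 4 hidden neurons -> 1 output.
# These weights are random placeholders; training would adjust them.
w1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
w2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

x = np.array([0.5, -1.2, 3.0])   # an example input
hidden = layer(x, w1, b1)        # activations of the hidden neurons
output = hidden @ w2 + b2        # final weighted sum
print(output)
```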
For a lot of people, the success of AlphaGo came to symbolize the end of the AI winter. This was the start of an exciting new period often known as the AI spring.
Here's the thing. Neural networks weren't a new idea. Like symbolic programming, they'd been around since the 1950s. But it was only now that computing power – and the sheer quantity of available data – had advanced enough to properly unlock their potential.
Along with AlphaGo, neural networks have also been used to build AI models like ChatGPT. It's like the Eliza chatbot, but significantly better – it's so good at generating human-like text that millions of people now use it to help with day-to-day writing tasks.
Along with ChatGPT, the AI spring has also seen other exciting leaps forward in the field of Artificial Intelligence. Waymo, for example – the company that grew out of Google's self-driving car project – is building self-driving cars, which use specialized sensors to 'look' at their surroundings and make sure that they're driving safely.
In the field of medicine, AI is being used to analyze x-rays and to help design new treatments, vaccines, and drugs. In business and banking, it's used to interpret vast amounts of data.
And don't forget about education! At Kinnu, our researchers are investigating ways to use AI to make the learning experience more adaptive, higher quality, and more accessible.
One thing's for certain: this field has come an awfully long way since the days of Charles Babbage and Ada Lovelace. Artificial Intelligence is real, and it's here, and the world won't ever be the same.