Miles Mathis' Charge Field

Harri Valpola dreams of an internet of beautiful AI minds



Post by Cr6 Sun Oct 08, 2017 1:57 am

http://www.wired.co.uk/article/harri-valpola-curious-ai-artificial-intelligence-third-wave

Harri Valpola dreams of an internet of beautiful AI minds  (more at link...)

The Finnish computer scientist says he has solved AI's fundamental problem: how to make machines that plan

Harri Valpola dreams of an internet of minds. “It will look like one huge brain from our perspective,” he says, “in much the same way that the internet looks like one big thing.” That impression will be an illusion, but we will think it all the same. It is the only way our limited human brains will be able to comprehend an internet of connected artificial intelligences.

Valpola has set himself the task of building this future network. But at the moment his goal seems far away. Despite all the advances made in recent years, the Finnish computer scientist is disappointed with the rate of progress in artificial intelligence.

“All of the AI which is currently in use is second rate,” he says. “It's a stupid lizard brain that doesn't understand anything about the complexities of the world. So it requires a lot of data. What we want to build is more like the mammalian brain.”

Valpola, 44, is founder of The Curious AI Company, a 20-person artificial intelligence startup based in Helsinki, which has just raised $3.67 million in funding – small change compared to many tech funding rounds, but an impressive sum for a company that has no products and is only interested in research.

“It is unusual to invest in a research-focused business,” says Daniel Waterhouse, partner at venture capital firm Balderton Capital, which contributed to this and the previous seed round. “But we take a long term view and believe there will be product and business opportunities along the way. This approach also makes for an exciting opportunity for the best talent to flourish and Harri has built – and continues to build – a world class team.”

World class, in this context, means academically gifted, and Valpola has the kind of impeccable research pedigree common to many of the computer scientists and mathematicians at the forefront of artificial intelligence. A student of Finnish neural network pioneer Teuvo Kohonen, he spent twenty years at Aalto University teaching and studying artificial brains. But it wasn’t until he left academia in 2007 and applied his theories to the “dirty data” of real-world problems that he realised what he was missing.

Wanting to put his theories into practice, Valpola co-founded ZenRobotics, a startup building brains for intelligent robots. “The original plan was to make a revolution in AI,” he says. But techniques that worked well in the lab struggled to cope with the brutality of physical reality.

The first problem was the data. In simulations, the robots could “see” everything around them. The messy, complex physical world was far less visible. The second problem was that the way to get around this – by running millions of tests to see what worked – wasn’t an option. Like their human creators, the robots were slow and corporeal; repetition ground them down before they’d had a chance to develop.

“In the real world,” Valpola says, “interaction is a very scarce resource. Many of the techniques that I use and that have demonstrated amazing results are based on simulated environments. AlphaGo, for instance. It's a really great system, but the number of games that it needs to play in order to learn is – at some point it was 3,000 years of play before it reaches top human player level. That’s lifetimes.”

Unable to achieve its initial aim, ZenRobotics changed course: now its robots pursued the simpler goal of sorting useful raw materials from industrial waste. The startup raised £11 million and attracted some of the biggest recycling companies in the world as customers, but for Valpola, it was a letdown. So, in 2015, he left ZenRobotics to try again.

The Curious AI Company tackles the problems that stymied ZenRobotics, starting with the difficulty of the data. Valpola’s method is simple: “The best way to clean dirty data is to get the computer to do it for you.” His first attempt was revealed in a paper published in 2015, which described a ladder network: a neural network that trained itself to deal with complicated situations by injecting noise into its results as it went along, like a teacher keeping her students on their toes by throwing mistakes into a test.
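
To make the noise-injection idea concrete, here is a heavily simplified sketch in PyTorch – closer in spirit to a stripped-down consistency/denoising variant than to the full ladder architecture in the 2015 paper, and with purely illustrative names and sizes. One corrupted and one clean pass go through the same small network; unlabelled data contributes a penalty for disagreeing with the clean pass, while the few labelled examples contribute an ordinary cross-entropy loss.

```python
# Simplified, hypothetical sketch of noise-injection semi-supervised training.
# Not the paper's implementation; all names and dimensions are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, in_dim=784, hidden=256, n_classes=10, noise_std=0.3):
        super().__init__()
        self.noise_std = noise_std
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, n_classes)

    def forward(self, x, corrupt=False):
        if corrupt:  # inject Gaussian noise into the input and the hidden layer
            x = x + self.noise_std * torch.randn_like(x)
        h = F.relu(self.fc1(x))
        if corrupt:
            h = h + self.noise_std * torch.randn_like(h)
        return self.fc2(h)

def ladder_style_step(model, optimizer, x_labelled, y, x_unlabelled, w_unsup=1.0):
    optimizer.zero_grad()
    # Supervised path: corrupted pass on the small labelled set.
    sup_loss = F.cross_entropy(model(x_labelled, corrupt=True), y)
    # Unsupervised path: the corrupted output should match the clean output.
    clean = model(x_unlabelled, corrupt=False).detach()
    noisy = model(x_unlabelled, corrupt=True)
    unsup_loss = F.mse_loss(F.softmax(noisy, dim=1), F.softmax(clean, dim=1))
    loss = sup_loss + w_unsup * unsup_loss
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with random stand-in data (MNIST-sized inputs).
model = Encoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_lab, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
x_unlab = torch.randn(128, 784)
print(ladder_style_step(model, opt, x_lab, y, x_unlab))
```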

The ladder network allowed the computer to learn without a huge collection of pre-labelled examples, a technique known in the field as semi-supervised learning. When it was tested on pictures of handwritten digits, one of the datasets commonly used in the field for benchmarking, the results were stunning. With just 100 initial labelled training examples, the system recognised almost 99 per cent of images correctly. Leading computer scientists hailed its “very impressive, state of the art results.”

Valpola carried on developing the technique to help it deal with other kinds of datasets. At this year’s Conference on Neural Information Processing Systems (the leading conference in AI, better known as NIPS), he is going to present a cousin of the ladder network, punningly entitled Mean Teacher. Published tests – this time on pictures of house numbers from Google Street View – show it outstripping previous efforts, even with fewer training examples.
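
The core mechanism of Mean Teacher, sketched loosely below with hypothetical names and a toy setup, is a student network trained as usual on the labelled data, plus a “teacher” whose weights are an exponential moving average (EMA) of the student’s; the student is additionally penalised when its predictions on noised unlabelled inputs disagree with the teacher’s. This is a rough illustration of the idea, not the published training recipe.

```python
# Rough sketch of the Mean Teacher idea (simplified, hypothetical setup).
import copy
import torch
import torch.nn.functional as F

def update_teacher(student, teacher, alpha=0.99):
    # teacher_param <- alpha * teacher_param + (1 - alpha) * student_param
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(alpha).add_(s_p, alpha=1 - alpha)

def mean_teacher_step(student, teacher, optimizer, x_lab, y, x_unlab, w_cons=1.0):
    optimizer.zero_grad()
    sup_loss = F.cross_entropy(student(x_lab), y)
    # Consistency: student and EMA teacher see independently noised copies.
    noise = lambda x: x + 0.1 * torch.randn_like(x)
    with torch.no_grad():
        teacher_out = F.softmax(teacher(noise(x_unlab)), dim=1)
    student_out = F.softmax(student(noise(x_unlab)), dim=1)
    cons_loss = F.mse_loss(student_out, teacher_out)
    loss = sup_loss + w_cons * cons_loss
    loss.backward()
    optimizer.step()
    update_teacher(student, teacher)   # teacher trails the student via EMA
    return loss.item()

# The teacher starts as a copy of the student and never receives gradients.
student = torch.nn.Sequential(torch.nn.Linear(784, 256), torch.nn.ReLU(),
                              torch.nn.Linear(256, 10))
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
x_lab, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
x_unlab = torch.randn(128, 784)
print(mean_teacher_step(student, teacher, opt, x_lab, y, x_unlab))
```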

“The results I see in the paper are quite good, and continue to hammer at semi-supervised learning, beating another record,” says Yoshua Bengio, professor of computer science at the University of Montreal, one of the leading figures in deep learning.

Valpola is also working on the other problem he identified at ZenRobotics: AIs’ dependence on trial and error. This is how the most advanced “model free” AIs – ones which haven’t been explicitly coded with the rules of the world they’re encountering – figure out what to do. That’s fine with video games, where it’s possible to try out billions of different scenarios and gradually build a sense of what works, but in the physical world things are never so easy. To function with any fluency here, Valpola says, AIs will need the ability to reason based on relatively little information: to do what humans call planning.

The problem is that neural networks only work in one direction. Give them images of pandas and gibbons and they will whizz through them at high speed, classifying pandas as pandas and gibbons as gibbons. But ask, “What kind of image would you classify as a gibbon?” and they are, as a famous case study showed, flummoxed, even when the supposed gibbon looks almost identical to a panda. “There's a solid theory behind this but it's still striking to see it in action,” Valpola says. “How can a network which is so reliable in one direction be so stupid in the other direction?”
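
That famous case is an adversarial example: nudge the input image a few small steps along the network’s own gradient and the classifier’s answer flips to “gibbon”, while the picture barely changes to the human eye. The toy sketch below (hypothetical model, random stand-in image, arbitrary class index) shows the reverse question being asked by gradient ascent on the input rather than on the weights.

```python
# Illustrative sketch only: asking a classifier the reverse question by
# adjusting the *input* toward a target class (the sign trick from FGSM-style
# adversarial examples). Model, image and class index are stand-ins.
import torch
import torch.nn.functional as F

def nudge_towards_class(model, image, target_class, steps=10, step_size=0.01):
    x = image.clone().detach().requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x), torch.tensor([target_class]))
        loss.backward()
        with torch.no_grad():
            x -= step_size * x.grad.sign()   # step against the loss gradient
            x.clamp_(0.0, 1.0)               # keep pixels in a valid range
        x.grad.zero_()
    return x.detach()

# Toy stand-in model and a random "image"; in practice this would be a trained
# network and a real photograph of a panda.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
panda = torch.rand(1, 3, 32, 32)
gibbonish = nudge_towards_class(model, panda, target_class=3)
print((gibbonish - panda).abs().max())  # the perturbation stays tiny
```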

Humans are extremely good at this kind of reversal (as, incidentally, are many animals). You do it every time you want something, then ask yourself, “How do I make it happen?” You might just be debating whether to send your boss an email about holiday or speak to her in person, but in that moment you’re summoning an entire mental model of the world and its nature. Neural networks can absorb current situations and planned actions and use them to make predictions about what will happen. But they can’t, from that, turn around and say, “If you want this, the best thing to do is this.” They are inexorably linear.
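
One common workaround – offered here as a general illustration, not as anything Valpola has published – is to bolt a search on top of the forward-only predictor: sample many candidate action sequences, let the learned model predict where each one leads, and execute the first action of whichever sequence is predicted to land closest to the goal. The sketch below uses a toy, untrained forward model and made-up dimensions.

```python
# Sketch of planning with a forward-only model via random shooting.
# The forward model, state sizes and goal are hypothetical stand-ins.
import torch

def plan_by_random_shooting(forward_model, state, goal, horizon=5,
                            n_candidates=256, action_dim=2):
    # Sample candidate action sequences and roll each out through the model.
    actions = torch.randn(n_candidates, horizon, action_dim)
    s = state.expand(n_candidates, -1)
    with torch.no_grad():
        for t in range(horizon):
            s = forward_model(torch.cat([s, actions[:, t]], dim=1))
        cost = ((s - goal) ** 2).sum(dim=1)   # predicted distance to the goal
    best = torch.argmin(cost)
    return actions[best, 0]                   # execute only the first action

# Toy forward model: next_state = f(state, action); a real one would be
# learned from interaction data.
state_dim, action_dim = 4, 2
forward_model = torch.nn.Sequential(
    torch.nn.Linear(state_dim + action_dim, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, state_dim))
state = torch.zeros(1, state_dim)
goal = torch.ones(1, state_dim)
print(plan_by_random_shooting(forward_model, state, goal, action_dim=action_dim))
```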

