Building robots is nothing new – there have been DIY kits for years that produce what we call ‘Phase 1’ robots, machines with roughly the intelligence of a coffee maker that perform four or five basic movements. The difference between Phase 1 and Phase 2 robots is learning.
In the past, a DIY kit contained the parts to assemble a piece of metal that made lots of noise and performed a few tricks – but really it was just a pre-programmed object.
Phase 2 robots are ones that can learn from their environment and build a digital neural net. One thing to know at this point: the home robot hobby kit niche is exploding, although many people don’t know about it yet.
There are a number of things you should know about Phase 2 DIY robot kits and how the industry is changing:
It’s All In the Cloud
Running a neural network that learns from the environment simply requires too much processing power for most DIY robot kits. Putting the processing and storage in the cloud opens up a new world. This is expensive – there is a monthly cost to connect to a service like AWS – but doing so gives the robot access to powerful servers that can build and refine an AI as the robot learns.
This requires your robot to be able to connect to the internet, which is easy with a WiFi module that can be purchased for around $20-$40.
Putting your robot’s brain in the cloud and tapping into the computing power of remote servers is one of the most important steps for Phase 2 robots.
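To make the idea concrete, here is a minimal sketch of that robot-to-cloud round trip. Everything here is hypothetical – the endpoint URL, the payload fields and the stubbed `post` function are placeholders for illustration, not a real AWS API:

```python
import json

# Stand-in for your cloud service's upload URL (hypothetical).
CLOUD_ENDPOINT = "https://example.com/robot/learn"

def build_payload(robot_id, sensor_readings):
    """Package a batch of sensor readings for upload to the cloud trainer."""
    return json.dumps({
        "robot_id": robot_id,
        "readings": sensor_readings,  # e.g. camera frames, distances, motor state
    })

def send_to_cloud(payload, post=None):
    """Upload a payload; `post` is injected so this sketch runs offline.

    On a real robot you would pass something like
    `lambda url, data: requests.post(url, data=data)` instead.
    """
    if post is None:
        post = lambda url, data: {"status": "queued", "bytes": len(data)}  # offline stub
    return post(CLOUD_ENDPOINT, payload)

payload = build_payload("bot-01", [{"sonar_cm": 42.0}])
reply = send_to_cloud(payload)
print(reply["status"])  # the offline stub reports "queued"
```

The point of the injected `post` function is that the robot-side code stays the same whichever cloud provider you end up paying for.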
Seeing In Three Dimensions
A robot needs not only a camera to see out into the world – it needs multiple video inputs in order to build a 3D map of its surroundings. Our robots will all have at least three cameras to create the depth perception necessary for the robot to render a 3D space.
This 3D map is then sent to the cloud to be processed and learned from.
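The maths behind those multiple cameras is classic stereo triangulation: the further away an object is, the less it shifts between two camera views. A tiny sketch, with made-up camera numbers:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic two-camera triangulation: depth = f * B / d.

    focal_px     - focal length in pixels
    baseline_m   - distance between the two camera centres, in metres
    disparity_px - horizontal shift of the same point between the images
    """
    if disparity_px <= 0:
        raise ValueError("point is at infinity or cameras are mismatched")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers only: a 700 px focal length, cameras 10 cm apart,
# and a feature that shifts 35 px between the left and right image.
print(depth_from_disparity(700, 0.10, 35))  # → 2.0 metres away
```

A third camera adds another baseline, which helps resolve points the first pair disagrees on.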
A high-quality camera is also required for something a lot more subtle – recognising the faces and facial expressions of the people around the robot, and starting to build a database of them.
In a military situation, a robot would need to know who is a threat and who is a friend. Facial recognition is essential, and it is the area where the biggest strides will be made. The I/O problem is the biggest challenge future robots will need to solve to pass the Turing Test.
First they need very good input; then they need to process the information they receive and create output that makes a person connect with the robot.
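The "database of faces" idea can be sketched very simply: reduce each face to a numeric feature vector (an embedding) and match by distance. The vectors, names and threshold below are invented for illustration – a real system would get its embeddings from a trained network:

```python
import math

# Toy face database: each known person maps to an invented embedding.
known_faces = {
    "friend":   [0.9, 0.1, 0.3],
    "stranger": [0.1, 0.8, 0.7],
}

def euclidean(a, b):
    """Straight-line distance between two embeddings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(embedding, database, threshold=0.5):
    """Return the closest known name, or None if nothing is close enough."""
    name, dist = min(((n, euclidean(embedding, v)) for n, v in database.items()),
                     key=lambda pair: pair[1])
    return name if dist <= threshold else None

print(identify([0.85, 0.15, 0.25], known_faces))  # close to "friend"
print(identify([0.5, 0.5, 0.5], known_faces))     # no confident match → None
```

The threshold is the friend-or-threat dial: too loose and strangers match, too tight and known faces are rejected.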
Movement Is the Holy Grail
Finally – movement is one of the biggest challenges robots face right now. While walking seems like second nature to us, in the robotics world simple things like walking and picking up objects are the holy grail.
There are a lot of things we humans do without really thinking about them – things where we know more than we can explain. Something as simple as picking up a soft fruit like a tomato without squashing it – testing its firmness and knowing how to hold it – is something robots struggle with a lot.
All of these things require progress in robotics, but right now robots are very limited in their ability to move around. Getting a robot simply to walk from one part of a flat surface to another on a bipedal (two-legged) platform is still a challenge.
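The tomato problem can be illustrated with a toy feedback loop: squeeze in small steps and stop the moment a pressure sensor reports a firm grip. The firmness value and force numbers here are made up for the demo – a real gripper reads pressure from hardware:

```python
def grip(read_pressure, target=1.0, step=0.2, max_force=3.0):
    """Increase grip force in small steps until the sensor says 'held'."""
    force = 0.0
    while force < max_force:
        force += step
        if read_pressure(force) >= target:
            return force  # firm enough – stop before squashing the fruit
    raise RuntimeError("object slipped or sensor failed")

# Fake tomato: sensed pressure rises with applied force times an
# invented firmness coefficient.
tomato_firmness = 0.8
applied = grip(lambda f: f * tomato_firmness)
print(round(applied, 1))
```

A softer fruit (lower firmness) makes the loop squeeze further before the sensor reads "held" – exactly the adjustment humans make without thinking.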
Robots pose a lot of ethical challenges too – and in the future we may even see lawyers like Walker Law Group representing robot clients!