02 September 2019 | Nigel Miller
Current AI is essentially clueless, yet we trust it too much, says Nigel Miller.
Voice assistants such as Alexa or Siri can seem more authoritative than searching the web yourself. We name, humanise and trust these assistants, but their answers are plucked from the web by algorithms - we often have no idea how reputable the source is.
GPS devices are only as good as the satellite network and map data. People have driven into hazardous situations because their satnav told them to.
Automatic number plate recognition (ANPR), used in car parks to calculate parking fees, can go wrong. Mistype a single character of your plate at the payment machine and the system will fail to match you against its records, then fine you for not having a ticket. And if the camera doesn't get a clear view of your number plate as you leave, the system will assume you never left.
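The brittleness described above is easy to sketch: a system that matches plates by exact string comparison has no tolerance for a single mistyped character. This is a minimal, hypothetical illustration - the plate values and matching logic are invented for the example, not taken from any real ANPR product:

```python
# Hypothetical sketch of an exact-match plate lookup in a car-park system.
# One mistyped character means no match - and a fine for "no ticket".

recorded_entries = {"AB12 CDE", "XY34 ZZZ"}  # plates the camera logged on entry

def has_ticket(typed_plate: str) -> bool:
    """Exact string comparison: brittle against a single wrong character."""
    return typed_plate.strip().upper() in recorded_entries

print(has_ticket("AB12 CDE"))  # True  - matches the logged entry
print(has_ticket("AB12 CDF"))  # False - one wrong character, fine issued
```

A more forgiving design would tolerate near-misses (for example, accepting plates within one character of a logged entry), but that trades strictness for the risk of matching the wrong car.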
And how much would you really trust self-driving cars? If the industry adopts black box thinking, as aviation has - reporting faults, errors and incidents so it can learn from mistakes and improve - then we could have more faith in it. But we are still at the beginning of the self-driving road.
What about in the workplace? Facial recognition, RFID (radio-frequency identification) and location sensing can be useful for navigating buildings and finding people and services. But companies that drill down into what individuals are doing and where they are going risk breaking users' trust - and once trust in a system is lost through that kind of human misuse, it is hard to win back.
Machines are good at following rules, analysing data, and at speed, accuracy and repetition. Humans are good at judgement, empathy, creativity, improvisation and leadership. Blindly following a machine is a human error. So don't just go when you see a green light - stop, look and listen first.
Nigel Miller is managing director at Cordless Consultants